2019 and it’s still happening

June 24th, 2019

It’s halfway through 2019 and some major backbones are still not implementing operational best practices. Those operating large networks know the risk of BGP hijacks and other malfeasance. We had a major incident in 2018 where a hijack was used to take down parts of Amazon as part of a cryptocurrency theft. Real money is lost when these events occur, whatever value we may individually place on them.

Today brought the most recent such event, impacting many providers by directing traffic through a previously unknown provider running a BGP optimizer product from Noction. Many networks use solutions like this, but the risks they pose are seen regularly.

In 2007 I gave a talk at NANOG about some extremely simple mitigations, using AS_PATH-based filtering, to protect yourself from accepting invalid routes. I figure it’s time to link to it again – https://www.youtube.com/watch?v=W9WBBZOfWcA – so people can see how regularly these events occur. The system is still up and running 12 years later at https://puck.nether.net/bgp/leakinfo.cgi, showing the problem is ongoing. Just search for a contact ASN of 396531 to see today’s problems.

We must put pressure on our providers and backbone operators to implement things like peer locking and sanity filters to prevent backbone routes from being learned from customers. There is no reason for a provider like Cogent (174) to accept Sprint (1239) or Level3 (3356) routes from Verizon Business (701). Two of today’s leaked paths:

3277 39710 20632 31133 174 701 396531 33154 1239 9808
3277 39710 20632 31133 174 701 396531 33154 3356 13335
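To make the idea concrete, here is a minimal sketch in Python of what AS_PATH-based sanity filtering amounts to. This is illustrative only, not any router vendor’s filter syntax; the ASN set and function name are mine:

```python
# Minimal sketch of AS_PATH-based sanity filtering: reject any route learned
# from a customer whose AS_PATH contains a major backbone's ASN.
# The ASN set and function name are illustrative, not vendor syntax.
BACKBONE_ASNS = {174, 701, 1239, 3356}  # Cogent, Verizon Business, Sprint, Level3

def accept_from_customer(as_path):
    """Return True if a customer-learned route passes the sanity filter."""
    return not any(asn in BACKBONE_ASNS for asn in as_path)

# One of the leaked paths from today's incident: it transits 174, 701 and
# 1239, so a sane customer filter would have dropped it.
leaked = [3277, 39710, 20632, 31133, 174, 701, 396531, 33154, 1239, 9808]
print(accept_from_customer(leaked))   # False: route rejected
print(accept_from_customer([64512]))  # True: a normal customer path
```

On a real router this is a handful of as-path filter lines applied to customer sessions; the point is how little work it takes to stop a leak like today’s.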

It’s time to end this madness.

FTTH Parts List

December 29th, 2017

So you want to build yourself some FTTH?

Many people seem to be working on this and have requested that a list of required equipment and parts be posted/shared.

Here’s a quick list of items you will need:

Fusion Splicer – Cost around $1200
Jonard Fiber Optic Strippers (these are better than the ones that come in the Signal Fire Fusion Splicer kit) – Cost around $21
Scissors for cutting the Kevlar in patch cords – $12
Kimtech Wipes to clean and prepare fiber – $6 or so
Light Meter to check your fiber – $30
Visual Fault Locator & FC-LC Connector – $29
Pigtails for your fiber – $1 per connector
Splice Enclosures – Varying types

Drop Cable – varying types
Graybar – Single strand tone capable drop cable
Baltic Networks – 2 strand cable

If you are doing underground work, you want something like the RD4000 locate wand and transmitter. These can be had used on eBay for varying prices.

You also will want to get something like the FlexScan FS200 OTDR so you can find cable faults.

A few other pro tips:
You can also cut patch cords; these can be cheaper per connector than pigtails.

Updated ADS-B partslist

October 14th, 2015

I’ve been helping a few people optimize their ADS-B setups recently and wanted to provide a simple aggregated location for people to purchase their parts and see my setup.


Outdoor Case
Mounting Plate inside case
Micro USB to USB cable
Raspberry PI 3
24V to 5V USB PoE adapter
Filter to Antenna cable
5dB ADS-B 1090 Antenna, or GO BIG with the 9dB antenna and see 300 miles when properly mounted

Once you install Raspbian via NOOBS or similar, you want to install the latest software. Here’s what I ended up doing:

sudo apt-get install git libusb-1.0-0-dev cmake itcl3 tcl-tls tcllib tclx8.4 apache2
git clone git://git.osmocom.org/rtl-sdr.git
git clone https://github.com/mutability/dump1090
cd rtl-sdr
mkdir build
cd build
cmake ../
sudo make install
sudo ldconfig
cd ../../
wget http://flightaware.com/adsb/piaware/files/dump1090_1.2-3_armhf.deb
wget http://flightaware.com/adsb/piaware/files/piaware_2.1-5_armhf.deb
sudo dpkg -i *.deb
sudo apt-get install -fy
cd dump1090
make
sudo cp -f dump1090 /usr/bin
sudo rsync -av public_html/ /var/www/html/
cd /var/www/html
sudo rm index.html
sudo ln -s gmap.html index.html
sudo mkdir data
sudo sh -c 'echo "blacklist dvb_usb_rtl28xxu" > /etc/modprobe.d/fadump1090.conf'

Once that’s in there, go ahead and edit your /etc/init.d/fadump1090.sh file and make the PROG_ARGS line look like this:

PROG_ARGS="--lat lat.lattt --lon -lon.lon --ppm 0 --oversample --forward-mlat --fix --phase-enhance --max-range 450 --net-ro-size 500 --net-ro-interval 1 --net-buffer 2 --quiet --net --gain -10 --enable-agc --write-json /var/www/html/data/ --write-json-every 1 --json-location-accuracy 1"

This should result in a nice setup where you can see 200-300 miles away. You will still need to register with Flightaware, eg:

sudo piaware-config -autoUpdate 1 -manualUpdate 1
sudo piaware-config -mlatResultsFormat beast,connect,localhost:30004
sudo piaware-config -user username -password
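With --net enabled in the PROG_ARGS above, dump1090 also serves decoded messages in the CSV-style BaseStation (SBS) format on TCP port 30003. A rough sketch of pulling a few fields out of one such line; the field positions follow the commonly documented SBS layout, and the sample line is made up:

```python
# Parse a BaseStation (SBS) message line as served by dump1090 on port 30003.
# Field positions follow the commonly documented SBS layout; the sample line
# below is synthetic, not captured data.
def parse_sbs(line):
    f = line.split(",")
    return {
        "hexident": f[4],           # ICAO 24-bit aircraft address
        "altitude": f[11] or None,  # feet; empty on non-position messages
        "lat": f[14] or None,
        "lon": f[15] or None,
    }

sample = ("MSG,3,1,1,A12345,1,2015/10/14,12:00:00.000,"
          "2015/10/14,12:00:01.000,,35000,,,42.30,-83.50,,,,,,0")
print(parse_sbs(sample))
```

On a live receiver you could feed this from something like “nc localhost 30003” and watch aircraft scroll by.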

Hope this helps you!

Raspberry PI2 and both i2c busses

May 27th, 2015

I’m working on a project that uses devices with overlapping i2c addresses. In more recent Raspberry Pi revisions they changed how this works, and there is quite a bit of confusion on forums about how to do it. Here’s your 2015 update, using NOOBS as a starting point:

Append these two lines to /boot/config.txt:

echo dtparam=i2c_arm=on >> /boot/config.txt
echo dtparam=i2c_vc=on >> /boot/config.txt

Then append bcm2708.vc_i2c_override=1 to the end of the single line in /boot/cmdline.txt.
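Since cmdline.txt must remain a single line, here is a small sketch of appending the override safely. It is demonstrated against a temporary copy; on the Pi itself you would point it at /boot/cmdline.txt:

```python
# Append bcm2708.vc_i2c_override=1 to a cmdline.txt-style file, which must
# stay a single line. Demonstrated on a temporary copy; on a real Pi the
# path would be /boot/cmdline.txt.
import tempfile

def append_param(path, param="bcm2708.vc_i2c_override=1"):
    with open(path) as fh:
        line = fh.read().strip()
    if param not in line.split():  # skip if already present (idempotent)
        with open(path, "w") as fh:
            fh.write(line + " " + param + "\n")

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as tmp:
    tmp.write("console=serial0,115200 root=/dev/mmcblk0p2 rootwait\n")
append_param(tmp.name)
print(open(tmp.name).read().strip())
```

Running it twice leaves the file unchanged, so it is safe to re-run after an image update.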

With this, you can use both i2c buses, on pins 3,5 and 27,28. Keep in mind you may need pull-ups for pins 27,28 in your i2c setup, whereas pins 3,5 have them on-board.

root@raspberrypi:/home/pi# i2cdetect -l
i2c-0    i2c           3f205000.i2c                        I2C adapter
i2c-1    i2c           3f804000.i2c                        I2C adapter

root@raspberrypi:/home/pi# ls -ld /dev/i2c*
crw-rw---T 1 root i2c 89, 0 May 27 01:08 /dev/i2c-0
crw-rw---T 1 root i2c 89, 1 May 27 01:08 /dev/i2c-1

PiAware/Dump1090 optimal setup

April 8th, 2015

I often find myself standing outside wondering what that plane flying overhead is. Services like Flightaware, or even Siri (“Wolfram Alpha, planes overhead”), can help with this, but most have a 5-10 minute delay in the data you receive.

ADS-B (Automatic Dependent Surveillance-Broadcast) is an automated system for delivering data from planes to surrounding aircraft and ground listeners. All aircraft are required to be retrofitted by 2020 in the US/FAA region.

After spending some time tinkering, I have an optimal ADS-B setup established at my home which allows me to see up to 150 planes as far as 200 miles away. I wanted to document the parts list for what I did. While Flightaware has a list here: http://flightaware.com/adsb/piaware/build, that list is imperfect and slowly becoming out of date. Most items are available via Amazon Prime.

Required Parts:
* Raspberry Pi Model B+ (B Plus) 512MB – $34, or Raspberry Pi 2 Model B – $39
* ADS-B USB adapter with antenna – $24, or USB ADS-B adapter without antenna – $17
* Power for Raspberry Pi (2 Amp USB), or WS-POE-USB-Kit for Raspberry Pi – $27
* 16GB MicroSD card w/ adapter – $8

* STRONGLY RECOMMENDED: 1090Mhz Filter + Preamp – £41.99 + Shipping (may take 2+ weeks due to customs)
* ADS-B Antenna – $150
* ADS-B Antenna to Amplifier cable – $14
* Amplifier to USB Dongle cable $6
* Weatherproof Enclosure $45
* Fittings to attach box to building/chimney

Easy thin crust pizza

February 19th, 2015

While normally I focus on technical things, if you truly know me, you know I love pepperoni pizza and thin crust as well.

Here’s a quick and lazy (easy) way to make pizza:

Items needed:
– Hardtack rolling pin
– Rhodes Rolls
– Baking sheet
– Sauce
– Toppings (e.g. pepperoni, cheese)
– Flour

Optional: Cooking Spray, parchment paper

Set out 2-4 frozen rolls per person in a bowl or pan. Spray the pan with cooking spray, or sprinkle some flour on the rolls so they do not stick to each other. Let them warm for 1 hour. You can speed this up by placing the pan by the vent of your stove and turning the oven on to slowly warm the dough.

After the dough has thawed and risen, sprinkle some flour on the roll or rolling pin so it doesn’t stick and is easy to work with.

Pre-heat your oven to 500-550°F.

Roll out the dough until it is thin enough to be nearly transparent. The finished pizza will end up about 2x the thickness of what you roll out. Roll away from yourself, rotate and flip the dough, and repeat until you reach a size of around 7-8 inches.

Put the dough on a cookie sheet, apply sauce and your toppings, and cook for 5 minutes or until the crust starts to turn golden.

If you are making many of these, beware that rolling out 2 rolls can easily take the entire 5 minutes of baking time, so an assistant may be helpful.

I find a kid eats 1-2 of these pizzas and an adult 2-4.

LB4M and cheap switching

February 13th, 2015

I’ve been starting to play around with the LB4M as a cheap switching platform. These can be had easily on eBay and other sites for around $100-105, including 2x 10G-SR optics as part of the deal. The downside is the switches are a bit noisy, and the CLI and software are difficult to work with. The platform is also not well supported by the manufacturer.

I’ve decided to create a small archive of the images and data related to this platform. Those can be found here: http://puck.nether.net/~jared/lb4m/

I am hoping to document some of the efforts I’m undertaking with these and any progress I have on getting more modern software, or even something linux based running on the box.

If you know how to do a factory firmware restore on these, please do contact me, even if it requires XMODEM or JTAG. I managed to load the improper firmware on the box such that the Boot Menu does not even appear.

Dynamic DNS and what it has to do with IPv6 and the NO-IP outage

July 2nd, 2014

For many years there have been a number of Dynamic DNS providers offering various paid and free services to the community. Some companies, like DynamicDNS, have parlayed these into larger commercial DNS offerings (they are now called Dyn).

Why do end-users need dynamic DNS services? The key reason has been that IP addresses changed often enough that nobody would want to manually manage DNS settings, as updates could take 24 hours or more to propagate.

Since the late 1990s there have been many changes under the hood of the internet and its protocols, including DNS: DNS notifies can now be sent so all the DNS servers for a zone stay in sync, and the reliability of the networks involved has skyrocketed to near-utility levels. (My home network stays up even if the power is out, all the way to the public internet.)

Marketers have taken advantage of this, with internet-connected devices from video cameras to phones and even telepresence robots. You can use your internet-connected security system or nanny cameras to check on the welfare of aging family members.

These devices either need to phone home to a central service or provide you a way to interact with them directly over the internet. Here’s where DDNS providers come into play; many of their clients are embedded into device firmware. Why is this necessary? Partly due to the changing IPs that may come with your internet service. Many people don’t want to pay for a fixed IP address, so instead they use free services.
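Most of these firmware-embedded clients speak some variant of the classic DynDNS update protocol: an authenticated HTTP GET announcing the hostname and its new IP. A rough sketch of the request such a client constructs; the endpoint, hostname and credentials are placeholders, and the exact parameters vary by provider:

```python
# Sketch of a DynDNS-style update request as embedded DDNS clients typically
# send it. Endpoint, hostname and credentials are placeholders; real
# providers differ in details.
import base64
from urllib.parse import urlencode

def build_update_request(host, ip, user, password,
                         endpoint="members.example.com"):
    query = urlencode({"hostname": host, "myip": ip})
    auth = base64.b64encode(f"{user}:{password}".encode()).decode()
    return (f"GET /nic/update?{query} HTTP/1.1\r\n"
            f"Host: {endpoint}\r\n"
            f"Authorization: Basic {auth}\r\n\r\n")

print(build_update_request("camera.example.net", "203.0.113.7",
                           "user", "secret"))
```

The simplicity is the point: a camera only needs to fire this off whenever its address changes, which is why the protocol ended up baked into so much firmware.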

Much of this is rooted in the slowly unfolding “IPv4 run-out”, but there’s a related issue: the lack of IPv6 support. This is a broad and complex problem with many moving parts. There is no clear demand for IPv6, as the existing internet “works just fine”, so why should investments be made? The IPv4 internet is not going away any time soon, and many devices are not yet IPv6-ready.

While I have business-class service and static IPs at my home, for many people that is not feasible to obtain. With the noip.com situation still unfolding, the most interesting stories for me are how people use security camera systems to check on elderly and mentally ill family members. I still view the internet as a bit less reliable than others do, and this is a use case I had not thought about. If these homes had proper IPv6 service, it would perhaps remove the need for a DDNS provider entirely, and with it the subsequent abuse and outage of these services, regardless of the cause.

It also reminds me that having a proper backup plan is critical. Internet operators make efforts to provide a stable and reliable service, but when it fails, what is your plan B? It’s an uncomfortable question: when technology fails you, whether your phone, GPS or an internet mapping service, is the impact minor or major?

Here’s hoping that IPv6 will properly flourish, reducing the general public’s dependency on DDNS providers for managing a home full of IP-connected devices.

An update on puck and poor IPv6 performance

February 15th, 2013

Turns out the saga may not yet be over. There is a defect in the current version of VirtualBox. I’m using VirtualBox-4.2-4.2.6_82870_fedora18-1.x86_64 right now, and there seems to be an issue where IPv6 performance is only ~22Kb/s or so in most of my experience.

I spent a bunch of time running “iperf -V -s” between a VM and a host on the same machine/lan/network interface. With GRO on, performance one way would be fast but the other way would be slow. Disabling GRO on the host seemed to work around the problem for me.

Hope this helps someone else.

PUCK Outage Information

February 14th, 2013

So, we often reboot machines with little to no consequence. We reboot our phones, cars, laptops, desktops and even servers. That uneventful routine isn’t what happened to me on Monday.

Many years ago I moved my machine out of my home and decided it would be a good idea to pool resources with several other people I was either hosting or sharing space with. Being a technology person, I had a T1 at my home from 1997-2010. Friends and others would share resources with me, and I returned the favor in kind.

I have used a variety of technology over the years, from the FreeBSD jail support in 4.8 (with a patch) up through the FreeBSD 7-8 series. Due to personal preference and my desire to spend less time compiling things (plus the fact that I disagree with FreeBSD packaging and development, and have had problems with modern hardware support…), I undertook building a replacement host in 2011.

FreeBSD jails can be quite elegant. You can run multiple servers on one piece of physical hardware, sharing the pool of disk space, CPU and memory, all without being limited to a fixed CPU count or memory footprint within a virtual machine as you are with VMware and other systems. Having used VMware in some form since my original 1.0.x license expired in 1999, I wanted to provide a reasonable service to those I shared with.

I moved the system to Linux, and the closest thing I could find at the time that wasn’t going to limit CPU/memory/disk usage was Linux-VServer (linux-vserver.org). This required a small kernel package and was distributed as part of Fedora without trouble. There were a few limitations to management, but I was willing to live with them at the time, and I proceeded to move ~7 machines over to the new hardware. Sometimes I would stand up something for a friend and then tear it down, but on Monday there were a total of 8. (One I have left down until the owner contacts me.)

So during the Monday reboot, the goal was to upgrade the IPMI interface on the motherboard (SuperMicro X9SCA-F) as well as various firmware on the SAS controller.

What happened next was something that would consume me for the next 48 hours.

Upon rebooting the system, the virtual machines would not start properly. I tried upgrading and downgrading the related packages, and rebuilding with the latest kernels and modules… Each time I rebooted the machine, I waited through a very long BIOS and SAS boot-up and initialization process (it takes ~45 seconds for the mpt2sas driver to probe my 4 disks). When I typed “shutdown -r now”, the IPMI interface would show the system actually powered off instead of rebooting. When you are sleep deprived and feeling a small bit of pressure, these small things feel worse.

At some point, approaching 24 hours into the process, the decision was made to just move all the systems into VirtualBox. You can judge me for that, but it was easy and free, and I found documentation online about using qemu-nbd to mount and rsync/move the files from the ~1.8TB /home partition that held puck.nether.net and the other hosts.

Well, in theory. When I built the system it was the height of the hard drive shortage, and I was also “cheap” and just got 4x 1TB 7200RPM SATA disks. The chassis is 2U and only has 8 bays. It turns out interesting things happen that slow you down, such as the I/O performance of the RAID 1+0 setup not being what you would like. As usual, linear reads run fast, but the lots of random files that people collect on their systems take a long time to stat() as part of that rsync process. The disk cache never seems like enough, and most filesystems don’t perform well under this load.

After trying to rsync the data over with qemu-nbd, it turned out this was corrupting the filesystem inside the new VM’s vdi file. One system took 3-4 tries to recover properly, and I finally had to destroy the file and redo everything. Trying to run 7 parallel rsyncs will also cause some really high numbers in iostat -x; you will see read/write wait times approaching 10+ seconds. I’ve seen some mean numbers this week, and those felt like they were slowing me down. Doing the syncs one at a time may have worked out better, but I was hoping the OS disk cache would work better than it did… Also, when iowait times get that long, it’s enough to cause the guest OS in VirtualBox (at least) to time out the emulated disk and reset its disk controller(!). This was not expected.

After many hours of this I decided to take a nap Tuesday morning and got in about 3 hours of sleep. Tuesday night I got more as I waited for the syncs to finish. Sometimes it’s just OK to leave something down and broken for a bit longer. Nobody was “really” screaming about things, but I felt obligated to fix it ASAP.

Of course, once I started to get the machines turned back up there were the inevitable problems. Mailman bounced a lot of mail because it wasn’t permitted by smrsh, though user email worked OK. The load average on the new VM went very high during mail processing, and it would periodically reject messages.

There’s a lot more that could be included, but I wanted to highlight a few last things: having more spindles is good. Having friends who will look at something when you are sleep deprived is good. Perhaps using a VM isn’t as evil as I had originally thought, though it still isn’t my first choice. Taking a nap and leaving things broken? Good.

Having a wife who is understanding and didn’t shoot me? Very good. I don’t think she often realizes how much she is appreciated, but she is appreciated more than I will share in public here.

Hope everyone is having a better week… I promise not to upgrade anything else for the next 15 minutes.