Jacques Mattheij

Technology, Coding and Business

Trainification

I love cycling. Sometimes I think I love it just a little bit too much. Such as the day two years ago that I took my Zephyr Lowracer for a spin and ended up in the hospital with my right leg broken in way too many pieces. It was a pretty harsh experience: one second you’re fine, the next you are flying through the air (speedbumps work on bicycles too…) and before I’d landed I already realized this was going to be a bad one. No disappointment on that front. Basically my foot was wrenched from my leg, my right leg dropped from the pedal, slowing it down, and then the bike with me on it went over it. Not a good sound, to hear your bones snapping. Before the ambulance arrived I’d set my foot myself (don’t try this at home). The ambulance guys were a bit surprised that they were called in for a broken leg because it all looked pretty good to them.

After a very gingerly ride to the hospital with an air splint around the whole affair I ended up being operated on for a really long time. Thanks to the wonders of modern medicine and the super nice nurses, caretakers, doctors and surgeons of the hospital in Hilversum I was out of the hospital in two days with about half a Home Depot hardware department in my leg. Steel plates and lots of screws. Now to heal. That took a long time, in fact it is still a work in progress. After a few weeks of doing nothing except hopping around on one foot and using crutches I took to cycling again. Very very carefully. This worked, but it didn’t take me long to realize that I was scared out of my wits of falling again. One of the problems of being a business owner is that you really can’t afford downtime. We didn’t miss a single job because of my little accident but that was mostly because of fantastic timing: it happened in the middle of the holiday season. By the time the first customers returned I was on crutches. Another week later I was walking - slowly. But I did notice that recovery went rather more slowly than I wanted. So more cycling, but no more room for accidents.

After a few weeks of this I realized that it was going well as long as I could keep it up, every day at least an hour. But if I slacked off for work, weather or other reasons it was back to square one, with everything stiff and painful.

A couple of weeks ago I finally had enough of this and decided to do something about it rather than stay frustrated, so I started to research indoor options. Many of them are way too expensive and seem to be mostly designed to look good to justify their price. I didn’t care much for that. Some more searching on the various forums and I realized I needed a hybrid between a bike and a home trainer. For a bit I messed around with duct-tape-and-glue solutions, and then the other day an associate at one of the companies I work for (thanks Reinder!) mentioned the Tacx brand again; I’d already run into them before. So I got one of those (the simple ones are pretty cheap) and it serves my needs well.

One of the things I noticed right away after getting it all to work was that it was extremely boring. No road to watch, no goal to strive for, no weather, just a blank wall. Listening to music helps, but it is only so much of a motivator. So what I ended up doing is this: I ‘trainified’ VLC.

Here is how it works: You want to watch a movie? Fine, but you’ll have to work for it. A minimum speed is set, say 30 km/h, and only when you are above that speed does the movie run normally. There is a movie called ‘Speed’ built around that theme, but there things blow up if you drop below the minimum speed. That wouldn’t go down well with the neighbors, so instead what happens is that as soon as you drop below the minimum speed the movie slows down, and the sound with it. So the only way to really watch the movie is to keep going, otherwise you start missing bits. For people with perfect pitch: watch music videos, you will be extremely motivated ;)

VLC has some excellent features for this: for one you can switch off audio stretching, which means the audio changes in pitch as soon as you slow down, for another there is a remote control feature that allows you to specify a ‘playback’ rate as a floating point value. A little transmitter on the bicycle counts wheel revolutions and a Python script then converts that data to the rate which is sent on to VLC. That’s all, and it works pretty well. A two hour movie is a pretty good workout this way.
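The conversion itself is only a few lines. Here is a minimal sketch of the idea; the names, the wheel circumference and the minimum rate are assumptions for illustration, the real script is just a modified version of the ANT+ demo mentioned below:

# Hypothetical sketch: turn wheel revolution reports into VLC 'rate' commands.
# Assumes the sensor delivers (cumulative_revolutions, event_time_seconds).

WHEEL_CIRCUMFERENCE_M = 2.1   # road wheel, roughly
MIN_SPEED_KPH = 30.0          # below this the movie starts to slow down
MIN_RATE = 0.25               # never let playback grind to a complete halt

def speed_kph(d_revs, d_seconds):
    """Speed from the change in revolutions over the change in time."""
    if d_seconds <= 0:
        return 0.0
    return d_revs * WHEEL_CIRCUMFERENCE_M / d_seconds * 3.6

def playback_rate(kph):
    """1.0 at or above the minimum speed, proportionally slower below it."""
    return max(MIN_RATE, min(1.0, kph / MIN_SPEED_KPH))

def on_sensor_update(prev, cur):
    """Called for every (revolutions, event_time) pair from the sensor."""
    kph = speed_kph(cur[0] - prev[0], cur[1] - prev[1])
    # VLC's rc interface accepts 'rate <float>'; stdout is piped to it
    # via tee and telnet as shown further down.
    print("rate %.2f" % playback_rate(kph), flush=True)

The ‘rate’ lines printed on stdout are exactly what ends up being piped into the telnet session shown below.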

Here are some pictures. This is an older Koga-Miyata racing bike, bought from the local bicycle shop for a song. The rear wheel has a special tire on it that is more wear resistant against this kind of use than a normal tire would be.

This is the speed sensor in the rear wheel:

And this is the ANT+ USB radio, which connects to a USB extension cable to get it out of the zone where the computer interferes with reception:

The Python code for the conversion is here; it is a slight modification of one of the demos in this repository.

The order of starting is a bit tricky: in three different shells you start VLC, the speed monitor and a telnet session to connect the two:

vlc yourmovie.mp4 --intf rc --rc-host 127.0.0.1:12345

python 07-rawmessage3.py | tee out.log

tail -f out.log | telnet localhost 12345

There are very fancy programs and online options to make something like this work, but they are all centered around simulating cycle trips and races, and I can’t get myself to pretend I’m cycling outdoors when I’m indoors; it seems like a giant waste of time. Not that movies are much better, but it does seem a bit more justifiable, and the penalty for going slow is much more realistic than the penalty for cycling slowly in a pretend trip or race.

So, instead of gamifying the real world, I’m trainifying entertainment. The same trick applies to all kinds of other fitness hardware, such as running machines and rowing machines. As long as you can find a part of the machine that you can attach one of those sensors to, you’re good to go. Maybe the next version of this setup will include a generator on the rear wheel coupled to a bunch of fans to simulate a headwind.

Happy training. And stay away from Low Racers. If anybody is interested in a Zephyr…

Dark Patterns, The Ratchet

In the design world, a dark pattern is something that is purposefully created to mislead users and to get them to perform actions against their own interest.

Since we are supposedly required to consent to giving up our privacy, there has developed a cottage industry of entities doing everything they can to obey the letter of the law while at the same time ignoring the spirit of it, or in fact turning the law on its head. Consent forms are the norm; picture your typical consent form as a very large chunk of barely legible text with a bright orange ‘Yes, give me that benefit’ button, and another one (much less inviting looking) saying ‘no thanks, I won’t consent’.

If that were it, then maybe it wouldn’t be so bad. But that’s really only the beginning. A good chunk of the consent forms are missing that second option; your only alternative is to figure out how to close the damn dialog without accidentally clicking the button. An even nastier version employs what I call ‘The Ratchet’. Instead of saying ‘no, no consent’ the other button will say something like ‘not now’. Just like a toddler who, when told ‘no, no cookie’, interprets the no as ‘not now’, making it OK to ask again. 30 seconds later.

And this will continue until one day you either out of sheer frustration or by accident click ‘Yes, give me that benefit’. And instantly all memory of the consent obtained will be forgotten. There is no way back. That ‘click’ of your mouse (or tap of the finger) was the ratchet advancing one little step. Good luck finding the permission you have just about irrevocably changed in a mountain of convolution designed to lead you astray in your quest to undo your action.

Of course it would be trivial to have a log of recently given permissions and an ‘undo’ option for each of those. But there is no money in there and so you won’t find it. The ratchet has clicked and that’s all that matters, you ‘gave your consent’, time to move on. And so the company gets to claim that not only did you give your consent, you gave it willingly and obviously they would have never ever used your data without that consent.

The cumulative effect of all those little ratchets on your privacy is rather terrible, but there is no denying it: you were along for the ride and you had the option to ‘opt out’ at every step. Not just from that one dialogue: from all of them, by refusing to use products that are fielded by companies that engage in these unethical practices. Say no to ‘The Ratchet’ and its sick family of dark patterns designed to little-by-little chip away at your privacy; kick products and companies like that to the kerb where they belong. That’s the only way to really opt out.

E-Stop and Fuel, software that keeps you awake at night

Computer code that I’m writing usually doesn’t keep me up at night. After all, it’s only bits & bytes and if something doesn’t work properly you can always fix it, that’s the beauty of software.

But it isn’t always like that. Plenty of computer software has the capability of causing material damage, bodily harm or even death. Which is why the software industry has always been very quick to disavow any kind of warranty for the product it creates and - strangely enough - society seems to accept this. It’s all the more worrisome because as software is ‘eating the world’ more and more software is inserting itself into all kinds of processes where safety of life, limb and property is on the line.

Because most of the software I write is of a pretty mundane type and has only one customer (me) this is of no great concern to me; worst case I have to re-run some program after fixing it, or maybe I’ll end up with a sub-optimal result that’s good enough for me. But there are two pieces of software that I wrote that kept me awake at night worrying about what could go wrong and whether there was any way in which I could anticipate the errors before they manifested themselves.

E-Stop

The first piece of software that had the remarkable property of being able to interfere with my sleep schedule was, when you looked at it from the outside, trivially simple: I had built a complex CAD/CAM system for metalworking, used to drive lathes and mills retrofitted with stepper motors or servos, to help transition metal working shops from being all manual to CNC. I have written about this project before in ‘the start-up from hell’, which, if you haven’t read it yet, will give you some more background. The software was pretty revolutionary, super easy to use and in general quite well received. The electronics that it interfaced to were for the most part fairly simple; the most annoying bit in the whole affair was that we were using a glorified game computer (an Atari ST) to drive the whole thing.

The ST, for its time, was a remarkable little piece of technology. It came out of the box with a 32 bit processor (Motorola 68K), a relatively large amount of RAM (up to 1 MB, which for the time was unheard of at that price point), and all kinds of ports coming out of the back and the side of the machine. Besides the usual suspects, Centronics and serial ports, the ST also came with two MIDI ports, two joystick ports, a hard drive connector and some other places to stick external peripherals into.

As the project progressed all these ports were occupied one by one until there was no socket left unused to feed information to the CPU or to take control signals back out. In the first design iteration we only had two stepper motors to drive, which was trivially done with the Centronics port. A toolchanger then occupied some more bits on the Centronics port, creating the need for some off-board latches. These were then immediately used to drive a bunch of relays as well, to give us the ability to switch coolant pumps, warning lights and other stuff on and off. Analogue I/O went to one of the MIDI ports and an encoder to determine the position of the spindle went to the remaining I/O port. Before long everything was occupied. And then, with the software stable and all available hardware I/O already in use, we determined the need to have a software component to the E-Stop circuitry.

This is not some kind of overzealous form of paranoia, some of the servos we’d drive from these boards were as large as buckets and would be more than happy to crush you or rip your arm off, one of the machines we built drove a lathe with a 5 meter (15’) chuck to cut harbour crane wheels. Fuck-ups are not at all appreciated with gear like that.

So, this was bad. There literally wasn’t a single I/O port left that we could have used for this in a reliable way without having to take out a piece of functionality that was already in use at various customers. And yet the E-Stop circuitry was a hard requirement: for one, the local equivalent of OSHA would not sign off on the machine without it (and rightly so); for another, without it we relied far too heavily on our end-users keeping their wits about them while using the machine, which is a really bad idea when it comes to dealing with the aftermath of what quite possibly involved shutting down a chunk of dangerous machinery to avoid an accident. After all, the ‘E’ in E-stop stands for ‘Emergency’, and once that switch gets pushed you should not assume anything at all about the state of the machine.

So, this was a bit of a brain teaser: how to reliably restart the machine after an E-stop condition is detected. The E-stop mechanism itself was super simple: mushroom switches wired in series were placed at strategic points on the machine and the equipment case; pressing any switch would latch that switch in the ‘off’ position (E-stop engaged), and to release the switch you had to rotate it. Breaking the circuit caused a relay to drop out, which cut the power to all hardware driving motors, pumps and so on. That way you could very quickly disable the machine, but you had to make a very conscious decision to release the E-stop condition.

The hard part was that once that situation had come and gone, if you did not have each and every output port in a defined state, releasing the E-stop switch would most likely lead to an instantaneous replay of the previous emergency, or potentially an even bigger problem! Worse still, if the power had been cut to the computer itself, the boot process was not guaranteed to put sane values on all output ports, causing the machine to malfunction the instant you tried to power it up. Keeping the state of the hardware and the software in sync was a must. Once the CAM software was up and running and had gone through its port initialization routine you could enable the relay again, but there was no output port available to do this, and even if there had been, using a single line to drive the relay would likely cause it to be activated briefly during a reboot, something you really do not want on a relay wired to ‘hold’ itself in the on position.

After many sleepless nights I hit on the following solution: analyzing the output of the 8-bit Centronics port after many power-up cycles, it was pretty clear that even though the outputs were terribly noisy they were also quite regular. This held across all the STs that I could get my hands on (quite a few of them; we had a whole bunch of systems ready to be shipped). Knowing that suggested that instead of using single lines, a pattern could be used that would not occur at all while the machine was ‘off’, and that was impossible to generate by accident when the machine was ‘on’ (to avoid triggering the sequence during an output phase). A pattern clocked out via the normal Centronics sequence (load byte, trigger the ‘strobe’ line, wait for a bit, next byte), plus a decoder and a bunch of hardwired diodes feeding 8-bit input comparators, took care of the remainder. That way the relay would never switch into the ‘on’ state by accident during a reboot or cold boot, no single-wire output state could drive the relay, and no regular operation could accidentally trigger it. All conditions satisfied.
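To make that a bit more concrete, here is a minimal present-day sketch of the ‘enable pattern’ idea; the byte values and the write_byte/pulse_strobe callbacks are hypothetical, the original was of course code on the ST driving the Centronics port directly, not Python:

# Hypothetical sketch of the E-stop 'unlock' idea: the relay is only enabled
# after a specific multi-byte pattern has been clocked out, byte by byte, with
# a strobe pulse after each byte. Boot-time noise or a single stuck line
# cannot reproduce the whole sequence, and the bytes are chosen so they never
# appear in that order during normal output.
import time

ENABLE_SEQUENCE = [0xA5, 0x3C, 0x96, 0x5A]   # hypothetical values, matched by
                                             # the diode/comparator decoder

def enable_estop_relay(write_byte, pulse_strobe, settle_s=0.001):
    """Clock out the enable pattern; only called after every output port
    has been put into a known safe state."""
    for value in ENABLE_SEQUENCE:
        write_byte(value)     # put the byte on the 8 data lines
        pulse_strobe()        # latch it into the external comparator chain
        time.sleep(settle_s)  # give the hardware time to settle

def normal_output(write_byte, pulse_strobe, data):
    """Regular Centronics-style output; in the real design the data stream
    could never contain the enable sequence as a sub-sequence."""
    for value in data:
        write_byte(value)
        pulse_strobe()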

Some of those machines sold in the 80’s are still alive today and still running production; quite a few of them were sold, and the successor lines are still being sold today (though with completely redesigned hardware and software).

Fuel

The second piece of software that kept me awake at night was a fuel estimation program for a small cargo airline operating out of Schiphol Airport in the Netherlands.

Writing software for aviation is a completely different affair than writing software for almost every other purpose. The degree of attention to correct operation, documentation, testing, resilience against operator error and so on is eye-opening if you have never done it before. I landed this job through a friend of mine who thought it was right up my alley: a really ancient system running a compiled BASIC binary on a PC needed to be re-written because the source code could not be produced, and even if it could be produced the hardware was being phased out and the development environment the software was based on no longer existed (the supplier of that particular dialect of BASIC had gone out of business). The software was getting further and further behind, the database of airports that it relied on was seriously outdated (which is a safety issue) and so on.

So this was to be a feature-for-feature and bit-identical-output clone of the existing system, but with auditable source code, up-to-date data and on a more modern platform. The first hurdle was the compiled binary; a friend of mine (who at the time worked at Digicash) and the person that landed me the job brought some substantial help here: they decompiled the binary in record time, giving us a listing of the BASIC source code. This was a great help because it at least gave good insight into what made the original program tick, but it also showed how big the job really was: 10K+ lines of spaghetti BASIC interspersed with thousands of lines of ‘DATA’ statements, without a single line of documentation to go with it to indicate what was what. The only thing that helped was that we could figure out where the various input screens were and what fields drove which parts of the computation.

To make it perfectly clear: if this software malfunctioned, a 747 with an unknown load of cargo and between 5 and 7 crew members would take an unscheduled dip into some ocean somewhere, so failure was really not an option, and given the state of the available data it seemed that failure was very much a likely outcome.

Weeks went by, painstakingly documenting each and every input into the program: which variables held what values (after the decompilation process all variable names were two letters long, without any relation to their meaning), their allowed ranges (sometimes in combination with other fields), what calculations were done on them and which part of the output resulted. Complicating matters was that the BASIC runtime contained its own slightly funny floating point library, which could make it very hard to get identical output from two pieces of code that should have produced exactly that. For each and every such deviation there had to be a chunk of documentation explaining exactly what caused the deviation, what the range of such deviations would be and how this could affect the final estimate.
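To give an idea of the kind of deviation involved, here is a small illustrative sketch (made-up numbers, not the original code): as soon as one runtime keeps intermediate results in single precision and the other in double precision, the ‘same’ computation stops being bit identical:

# Illustration only: simulate a runtime that rounds every intermediate result
# to 32-bit single precision (as an old BASIC interpreter might) and compare
# it against straight double precision. The numbers are made up.
import struct

def as_single(x):
    """Round a Python float (double) to the nearest 32-bit float."""
    return struct.unpack('f', struct.pack('f', x))[0]

def estimate_double(weights):
    total = 0.0
    for w in weights:
        total += w * 1.1              # some made-up correction factor
    return total

def estimate_single(weights):
    total = 0.0
    for w in weights:
        total = as_single(total + as_single(w * 1.1))
    return total

legs = [10000.3, 20000.7, 30000.11, 40000.13]
d = estimate_double(legs)
s = estimate_single(legs)
print(d, s, abs(d - s))   # the two results differ in the low-order digits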

I learned a lot about take-off weights, cruising altitude, trade winds, alternate airports, the different types of engines 747s can be fitted with and how all these factors (and hundreds more) affected the fuel consumption. I have never before or after written a piece of software with so many inputs producing only one number as the output: how much fuel to take on to fulfill all operational, legal and safety requirements, and to be able to prove that this is the case. In the end it all worked out; the software went live next to the old software for a while, consistently did a better job and eventually someone pulled the plug on the old system. No cargo 747s flown by that company ever ended up in unscheduled mid-ocean or short-of-the-runway stops due to lack of fuel (or any other cause).

What I also learned from this job, besides all the interesting details about airplanes and flying, is that you can’t take anything for granted when it comes to computing critical values. You need to know all the details of the underlying runtime environment, the floating point computations, the hardware it runs on and so on. Then and only then can you sleep soundly, knowing that you have ruled out all the elements that could throw a wrench in your careful computations.

In closing

I liked both of these jobs quite a bit; even though in the first the environment was (literally) murderous, I learned a tremendous amount and I knew coming out of those jobs that I had ‘levelled up’ as a programmer. Even so, working on software when there are lives on the line is something that really opened my eyes to how incredibly irresponsible the software industry in general is when it comes to its work product. The ease with which we as an industry disavow responsibility, how casually we throw beta-grade software out there that interacts directly with, for instance, vehicles and how little of such software is auditable has me worried.

This time it is not my software that keeps me up at night, but yours. So if you are working on such software, take your time to get it right, make sure that you have thought through all the failure modes of what you put out there, and pretend that the lives of your spouse, children or other people you care about depend on it; one day they just might.

And if my words don’t convince you then please read up on the Therac-25 debacle, maybe that will do the job.

The Web in 2050

If you’re reading this page it means that you are accessing a ‘darknet’ web page. Darknets used to refer to places where illicit drugs and pornography were traded; these days the term refers to lonely servers without any inbound links, languishing away in dusty server rooms that people have all but forgotten about. Refusing to submit to either one of the two remaining overlords, these servers sit without traffic and mostly idle (load average: 0.00), except for when the daily automated back-up time rolls around. Waiting for a renaissance, heralded by the arrival of a packet on port 80 or 443, of the WWW as it once was known: a place where websites freely linked to each other. Following a link felt a bit like biting into a chocolate bon-bon, you never quite knew whether you were going to like it or be disgusted by it but it would never cease to surprise you.

In 1990, when the web was first started up, there was exactly one website, http://info.cern.ch. There was nothing to link to and nobody linking in; the page was extremely simple, without fancy graphics or eye candy, just pure information. Much like this lonely server here today. In January 1991, so pretty soon after, this was followed by the first webservers outside of CERN being switched on, making it possible for the first time to traverse the ‘web’ from one institution to another.

It was about as glamorous as you’d expect any non-event to be, but the consequences would be enormous. To reference another darknet site, the long since defunct W3 standards body, whose website is miraculously still available: in March of 1993 web traffic accounted for 0.1% of all traffic, by September it was 1%, and there was a grand total of 623 websites available at the end of the year. Anybody with a feeling for numbers who sees these figures knows what’s coming next: by the end of 1994 there were 10,000 websites and a year later the number was 25,000. We skip a few years to 2014, when there were a bit under a billion websites live.

So, here we are in March 2050 and as of yesterday, when Amazon gave up the fight for the open web and decided to join Google, there are only two websites left. What went wrong?

Two companies deserve extra attention when it comes to murdering the web: Google and Facebook, as you all know the last two giants standing after an epic, decades-long battle for control of the most important resource all of us consume daily: information. Whether you’re a Googler or a Facebookie, we can all agree, even if our corporate sponsor might not, that it seems as if there is less choice these days. If you’re under 30 you won’t remember a time before Google or Facebook were dominant. But you will remember some of the giants of old: Microsoft, The New York Times, The Washington Post, CNN and so on, the list is endless. If you’re over 50 you might just remember the birth of Google, with their famous motto ‘Don’t be Evil’. But as we all know the road to hell is paved with good intentions, and not much later (in 2004) Facebook came along with their promise to ‘connect the world’. Never mind that it was already well connected by that time, but it sure sounded good.

Goofy kid billionaires and benevolent corporations, we were in very good hands.

But somewhere between 2010 and 2020 the tone started to change. The two giants collided with each other more and more frequently and forcefully for control of the web, and in a way the endgame could already be seen as early as 2017. Instead of merely pointing to information, Facebook and Google (and many others, but those are now corpses on the battlefield) sought to make their giant audiences available only with themselves as the front door. Many tricks, some clean and some dirty, were deployed in order to force users to consume their content via one of the portals, with the original websites being reduced to the role of mere information providers.

This was quite interesting in and of itself because we’d already been there before. In the 80’s and 90’s there was a system called Viditel in Europe (called ‘Minitel’ in France) which worked quite well. The main architecture was based around telecommunications providers (much like Google and Facebook are today, after their acquisition war of the ‘roaring 20’s’ left them in possession of all of the world’s telcos, and after running into the ground the ones that wouldn’t budge through ‘free’ competition subsidized from other revenue streams). These telcos would enter into contracts with information providers, which in turn would give the telcos a cut of the fee on every page. In a way today’s situation is exactly identical, with one twist: the information providers provide the information for free in the hope of getting a cut of the advertising money Google or Facebook make from repackaging and in some cases reselling the content. The funny thing - to me, but it is bittersweet - is that when we finally had an open standard and everybody could be a publisher on the WWW we were so happy; it was as if we had managed to break free from a stranglehold: no longer wondering whether or not the telco would shut down our pages for writing something disagreeable, no more disputes about income due without risking being terminated (and after being terminated by both Google and Facebook, where will you go today, a darknet site?).

Alas, it all appears to have been for nought. In the ‘quiet 30’s’ the real consolidation happened; trench wars were being fought, with users being given a hard choice: join Google and your Facebook presence will be terminated, and vice versa. The gloves were definitely off, it was ‘us’ versus ‘them’. Political parties clued in to this and made their bets, roughly half ending up in Google’s bin and the other half with Facebook. Zuckerberg running for president as a Republican candidate in the United States more or less forced the Democratic party to align with Google, setting the stage for a further division of the web. Some proud independents tried to maintain their own presence on the web but soon faded into irrelevance. Families split up over the ‘Facebook or Google’ question; independent newspapers (at first joining the AMP bandwagon, not realising this was the trojan horse that led to their eventual demise) ceased to exist. The Washington Post ending up with Team Google probably was predictable and may have been a big factor in Facebook going after Amazon with a passion. One by one, what used to be independent webservers converted into information providers to fewer and fewer silos or risked becoming completely irrelevant. Regulators were powerless to do anything about it because each and every change was incremental, ostensibly for the greater good and, after all, totally made out of free will.

The ‘silent 40’s’ were different. No longer was there any doubt about how this would all end; if the battle for the WWW had been in full swing a decade earlier, this was the mopping-up stage, the fight for scraps with one last big prize left. An aging Richard Stallman throwing in the towel and switching off stallman.org rather than declaring allegiance to either giant was a really sad day. Amazon fought to the bitter end, trying to stay in the undivided middle ground between Google and Facebook. But dwindling turnover and Facebook’s launch of a direct competitor (‘PrimeFace’) forced their hand yesterday. And now, with Google sending free Googler t-shirts to all of the former members of team Amazon, it comes to a close.

Well, maybe. There is still this website and info.cern.ch is also still up. So maybe we can reboot the web in some form after all, two sites can make a network, even if they don’t have users. Or are there? Is anybody still reading this?

Sorting 2 Tons of Lego, Many Questions, Results

For part 1, see here. For part 2, see here

Reliability

The machine is now capable of running unattended for hours on end, which is a huge milestone. No more jamming or other nastiness that causes me to interrupt a run. Many little things contributed to this: I’ve looked at all the mechanics and figured out the little things that went wrong, one by one, and have come up with solutions for them. Some of those were pretty weird; for instance, very small Lego pieces went off the transport belt with such high velocities that they could end up pretty much everywhere. The solution for this was to moderate the duration of the puff relative to the size of the component, and to make a skirt along the side of the transport belt to make sure that pieces don’t land on the return side of the belt, which would cause them to get caught under the roller.
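As a rough illustration of that ‘puff proportional to part size’ fix, here is a minimal sketch; the constants and the area-based scaling are assumptions for illustration, not the actual values or code used by the machine:

# Hypothetical sketch: scale the valve 'puff' duration with the apparent size
# of the part so small pieces are nudged into their bin rather than launched
# across the room.
MIN_PUFF_S = 0.010            # below this the valve barely opens
MAX_PUFF_S = 0.060            # above this we just waste air and get ricochets
REFERENCE_AREA_PX = 15000.0   # area (in camera pixels) of a 'typical' part

def puff_duration(part_area_px):
    """Valve open time, roughly proportional to part area, clamped."""
    scaled = MAX_PUFF_S * (part_area_px / REFERENCE_AREA_PX)
    return max(MIN_PUFF_S, min(MAX_PUFF_S, scaled))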

Speed

The machine is now twice as fast. This is due to a pretty simple change: I’ve doubled the number of drop-off bins from 6 to 12, which reduces the number of passes through the machine. It now takes just 3 passes to get the Lego sorted into the categories below. This required extending the pneumatics with another 6 valves, another manifold and a bunch of wiring and tubing; the expanded controller now looks like this:

I’ve also ground off the old legs from the base (the treadmill) and welded on new ones to give a little bit more space to accommodate the new bins, but that’s pretty boring metal work.
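Conceptually the pass/bin bookkeeping is just an index split; here is a minimal sketch (my own illustration, not the sorter’s actual code) of one way to do it, where parts whose category is not handled in the current pass simply fall through to be re-fed on a later pass:

# Hypothetical sketch of the multi-pass bookkeeping: with 12 bins per pass,
# category k is handled on pass k // 12 and blown into bin k % 12; anything
# not handled in the current pass falls through and is re-fed later.
# 3 passes of 12 bins cover up to 36 sort categories.
BINS_PER_PASS = 12

def pass_and_bin(category_index):
    return divmod(category_index, BINS_PER_PASS)

def bin_for(category_index, current_pass):
    """Bin number for this pass, or None if the part should fall through."""
    p, b = pass_and_bin(category_index)
    return b if p == current_pass else None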

Accuracy

The image database now contains approximately 60K images of parts, which has had a positive effect on recognition accuracy: fairly common parts now have very high recognition accuracy, less common parts reasonably high (> 95%), and rare parts are still very poor, but as more parts pass through the machine the training set gets larger and in time accuracy should improve further. Judging by the increase in accuracy from 16K images to 60K images there is something of a diminishing rate of return here, and it is likely that by the time we reach 98-99% accuracy there will be well over a million images in the training set.

Software

I’ve reduced the image size a bit in the horizontal dimension, from 640 pixels wide to 320. Now that I have more data to work with this seems to give better results; the difference isn’t huge but it is definitely reproducible. The ResNet50 network still seems to give the best compromise between accuracy and training speed. I’ve added some utilities to make it easier to merge new samples into the existing dataset, to detect (and correct) classification errors and to drive the valve solenoids in a more precise way, to make sure that valves reliably open and close for precisely determined amounts of time. This also helps a lot in making sure that parts don’t shoot all over the room.
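For reference, a ResNet50-based classifier for this kind of setup could look roughly like the sketch below; this assumes a Keras/TensorFlow environment, and the input size, class count and hyper-parameters are placeholders, not the values used by the actual sorter:

# Minimal sketch of a ResNet50-based part classifier (assumed Keras/TensorFlow
# setup; input size, class count and hyper-parameters are placeholders).
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

NUM_CLASSES = 34              # one per sort category, assumed
INPUT_SHAPE = (240, 320, 3)   # 320 pixels wide, height assumed

base = ResNet50(weights='imagenet', include_top=False, input_shape=INPUT_SHAPE)

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation='softmax'),
])

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])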

Mechanics

Overall the machine works well, but I’m still not happy with the hopper and the first stage belt. That part of the machine runs much slower than the parts recognizer (which can handle 30 parts / second) and I really would like to come up with something better. I’ve looked at vibrating bowl feeders and even though they are interesting they are noisy and tend to be set up for one kind of part only. They’re also too slow. If anybody has a bright idea on how to reliably feed the parts then please let me know; it’s a hard problem for which I’m sure there is some kind of elegant and simple solution. I just haven’t found one yet :)

Media

The project has had tremendous coverage from all kinds of interesting publications: IEEE Spectrum had an article, and lots of internet based publications listed or linked it (for instance: Mental Floss, Engadget, Mashable). If you have published or know about another article about the sorter please let me know and I’ll add it to the list.

Of course I have this totally backwards

Starting by buying Lego, sorting it and then thinking about how to best sell the end result is the exact opposite of how you should approach any kind of project, but truth be told I’m in this far more for the technical challenge than for the commercial part. That leaves me with a bit of a problem: The sorter is working so well now that I am actually sitting on piles of sorted Lego that would probably make someone happy. But I have absolutely no idea if the sort classes that I’m currently using are of interest.

So if you’re a fanatical Lego builder I’d very much like to hear from you how you would like to buy Lego parts in somewhat larger quantities, say from 500 grams (roughly one pound) and up. Another thing I would like to know is where you’re located, so I can figure out shipping and handling.

As you can see, right now the sorting is mostly by functional groups: slopes with slopes, bricks with bricks and so on. But there are many more possibilities to sort Lego, for instance by color. Please let me know in what quantity and what kind of groupings would be the most useful to you as a Lego builder.

My twitter is at twitter.com/jmattheij and my email address is jacques@mattheij.com; here are some pictures of what the current product classes look like, but these can fairly easily be changed if there is demand for other mixes.

Technic:

Fences:

Space and Aircraft:

Slopes:

Wedge plates:

Vehicle parts:

Wheels:

1 wide bricks:

1 wide plates, modified:

1 wide plates:

Hinges and couplers:

Minifigs and minifig accessories:

2 wide plates:

Tiles:

Round:

Decorated:

Arches:

Plates 6 wide:

Plates 4 wide:

Baseplates:

Bricks 2 wide:

Doors and windows:

Construction equipment:

Brackets:

Cupboards:

Bricks 1 wide, modified:

Macaroni pieces:

Corner pieces:

Turntables:

Flags:

Vegetation:

Wedges:

Helicopter blades:

Stepped pieces: