It happens at least once in the lifetime of every programmer, project manager or team leader: you get handed a steaming pile of manure - if you're lucky only a few million lines' worth - the original programmers have long since left for sunnier places, and the documentation, if there is any to begin with, is hopelessly out of sync with what is presently keeping the company afloat.
Your job: get us out of this mess.
After your first instinctive response (run for the hills) has passed, you start on the project knowing full well that the eyes of the company's senior leadership are on you. Failure is not an option. And yet, by the looks of what you've been given, failure is very much in the cards. So what to do?
I've been (un)fortunate enough to be in this situation several times, and a small band of friends and I have found that it is a lucrative business to take these steaming piles of misery and turn them into healthy, maintainable projects. Here are some of the tricks that we employ:
Backup
Before you start to do anything at all, make a backup of everything that might be relevant. This is to make sure that no information is lost that might be of crucial importance somewhere down the line. All it takes is a silly question that you can't answer to eat up a day or more once a change has been made. Configuration data is especially susceptible to this kind of problem: it is usually not versioned, and you're lucky if it is taken along in the periodic back-up scheme. So better safe than sorry: copy everything to a very safe place and never touch it other than in read-only mode.
Important pre-requisite: make sure you have a build process and that it actually produces what runs in production
I totally missed this step on the assumption that it is obvious and likely already in place, but many HN commenters pointed it out and they are absolutely right: step one is to make sure that you know what is running in production right now, which means that you need to be able to build a version of the software that is - if your platform works that way - byte-for-byte identical with the current production build. If you can't find a way to achieve this, you are likely in for some unpleasant surprises once you commit something to production. Test this to the best of your ability to make sure that you have all the pieces in place, and then, after you've gained sufficient confidence that it will work, move it to production. Be prepared to switch back immediately to whatever was running before, and make sure that you log everything and anything that might come in handy during the - inevitable - post mortem.
Freeze the DB
If at all possible, freeze the database schema until you are done with the first level of improvements; by the time you have a solid understanding of the codebase and the legacy code has been fully left behind, you are ready to modify the database schema. Change it any earlier than that and you may have a real problem on your hands, because you will have lost the ability to run the old and the new codebase side by side with the database as the steady foundation to build on. Keeping the DB totally unchanged allows you to compare the effect of your new business logic code with that of the old business logic code; if it all works as advertised there should be no differences.
Write your tests
Before you make any changes at all write as many end-to-end and integration tests as you can. Make sure these tests produce the right output and test any and all assumptions that you can come up with about how you think the old stuff works (be prepared for surprises here). These tests will have two important functions: they will help to clear up any misconceptions at a very early stage and they will function as guardrails once you start writing new code to replace old code.
Automate all your testing. If you're already experienced with CI, use it, and make sure your tests run fast enough to run the full suite after every commit.
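To make the guardrail idea concrete, here is a minimal sketch of what one such end-to-end test could look like with pytest and requests, assuming a hypothetical HTTP endpoint and response shape; none of the URLs or fields come from a real system.

```python
import requests

# Hypothetical staging copy of the legacy system.
BASE_URL = "http://staging.example.internal"

def test_order_total_matches_assumed_behaviour():
    # Encodes one assumption about how the old code is believed to work;
    # be prepared for the test to prove that assumption wrong.
    resp = requests.post(f"{BASE_URL}/orders", json={"item": "widget", "qty": 3})
    assert resp.status_code == 200
    body = resp.json()
    assert body["total"] == 3 * body["unit_price"]
```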
Instrumentation and logging
If the old platform is still available for development, add instrumentation. Do this in a completely new database table: add a simple counter for every event that you can think of, and add a single function to increment these counters based on the name of the event. That way you can implement a time-stamped event log with a few extra lines of code, and you'll get a good idea of how many events of one kind lead to events of another kind. One example: user opens app, user closes app. If two events should result in some back-end calls, those two counters should over the long term remain at a constant difference; the difference is the number of apps currently open. If you see many more app opens than app closes you know there has to be another way in which apps end (for instance a crash). For each and every event you'll find there is some kind of relationship to other events; usually you will strive for constant relationships unless there is an obvious error somewhere in the system. You'll aim to reduce those counters that indicate errors, and you'll aim to maximize counters further down the chain to the level indicated by the counters at the beginning. (For instance: customers attempting to pay should result in an equal number of actual payments received.)
This very simple trick turns every backend application into a bookkeeping system of sorts, and just like with a real bookkeeping system the numbers have to match; as long as they don't, you have a problem somewhere.
This system will over time become invaluable in establishing the health of the system, and it is a great companion to the source code control system's revision log, where you can determine the point in time that a bug was introduced and what the effect was on the various counters.
I usually keep these counters at a 5-minute resolution (so 12 buckets per hour), but if you have an application that generates fewer or more events you might decide to change the interval at which new buckets are created. All counters share the same database table, so each counter is simply a column in that table.
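To illustrate how little code this takes, here is a minimal sketch of the counter table and the single increment function, using SQLite and made-up table and column names; a real implementation would live in whatever database the application already uses.

```python
import sqlite3
import time

BUCKET_SECONDS = 300  # 5-minute buckets, 12 per hour

db = sqlite3.connect("instrumentation.db")
db.execute("CREATE TABLE IF NOT EXISTS counters ("
           "bucket INTEGER PRIMARY KEY, "
           "app_open INTEGER DEFAULT 0, "
           "app_close INTEGER DEFAULT 0)")  # one column per counter

def bump(event):
    """Increment the counter column named after the event in the current time bucket."""
    bucket = int(time.time()) // BUCKET_SECONDS
    db.execute("INSERT OR IGNORE INTO counters (bucket) VALUES (?)", (bucket,))
    # The event name doubles as the column name, so the column must already exist.
    db.execute(f"UPDATE counters SET {event} = {event} + 1 WHERE bucket = ?", (bucket,))
    db.commit()

bump("app_open")  # e.g. called from the handler that opens a session
```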
Change only one thing at a time
Do not fall into the trap of improving the maintainability of the code or the platform it runs on at the same time as adding new features or fixing bugs. This will cause you huge headaches because you now have to ask yourself at every step of the way what the desired outcome of an action is, and it will invalidate some of the tests you made earlier.
Platform changes
If you've decided to migrate the application to another platform then do this first, but keep everything else exactly the same. If you want you can add more documentation or tests, but no more than that: all business logic and interdependencies should remain as before.
Architecture changes
The next thing to tackle is to change the architecture of the application (if desired). At this point you are free to change the higher-level structure of the code, usually by reducing the number of horizontal links between modules and thus reducing the scope of the code active during any one interaction with the end-user. If the old code was monolithic in nature, now would be a good time to make it more modular and to break up large functions into smaller ones, but leave the names of variables and data structures as they were.
HN user mannykannot rightly points out that this is not always an option; if you're particularly unlucky you may have to dig in deep in order to be able to make any architecture changes. I agree, and I should have included this here, hence this little update. What I would further like to add is that if you do both high-level and low-level changes, at least try to limit them to one file or, worst case, one subsystem, so that you limit the scope of your changes as much as possible. Otherwise you might have a very hard time debugging the change you just made.
Low level refactoring
By now you should have a very good understanding of what each module does and you are ready for the real work: refactoring the code to improve maintainability and to make the code ready for new functionality. This will likely be the part of the project that consumes the most time. Document as you go, and do not make changes to a module until you have thoroughly documented it and feel you understand it. Feel free to rename variables, functions and data structures to improve clarity and consistency, and add tests (also unit tests, if the situation warrants them).
Fix bugs
Now you're ready to take on actual end-user-visible changes. The first order of business will be the long list of bugs that have accumulated over the years in the ticket queue. As usual, first confirm the problem still exists, write a test to that effect and then fix the bug; your CI and the end-to-end tests you wrote should keep you safe from any mistakes you make due to a lack of understanding or some peripheral issue.
Database Upgrade
If required, after all this is done and you are on a solid and maintainable codebase again, you have the option to change the database schema or to replace the database with a different make/model altogether if that is what you had planned to do. All the work you've done up to this point will help you make that change in a responsible manner without any surprises: you can fully test the new DB with the new code, with all the tests in place to make sure your migration goes off without a hitch.
Execute on the roadmap
Congratulations, you are out of the woods and are now ready to implement new functionality.
Do not ever even attempt a big-bang rewrite
A big-bang rewrite is the kind of project that is pretty much guaranteed to fail. For one, you are in uncharted territory to begin with, so how would you even know what to build? For another, you are pushing all the problems to the very last day, the day just before you go 'live' with your new system. And that's when you'll fail, miserably. Business logic assumptions will turn out to be faulty, you'll suddenly gain insight into why that old system did certain things the way it did, and in general you'll end up realizing that the people who put the old system together maybe weren't idiots after all. If you really do want to wreck the company (and your own reputation to boot), then by all means do a big-bang rewrite, but if you're smart about it this is not even on the table as an option.
So, the alternative: work incrementally
To untangle one of these hairballs the quickest path to safety is to take any element of the code that you do understand (it could be a peripheral bit, but it might also be some core module) and try to incrementally improve it, still within the old context. If the old build tools are no longer available you will have to use some tricks (see below), but at least try to leave as much as possible of what is known to work alive while you start with your changes. That way, as the codebase improves, so does your understanding of what it actually does. A typical commit should be at most a couple of lines.
Release!
Every change along the way gets released into production. Even if the changes are not end-user visible, it is important to make the smallest possible steps, because as long as you lack understanding of the system there is a fair chance that only the production environment will tell you there is a problem. If that problem arises right after you make a small change you will gain several advantages:
it will probably be trivial to figure out what went wrong
you will be in an excellent position to improve the process
and you should immediately update the documentation to show the new insights gained
Use proxies to your advantage
If you are doing web development, praise the gods and insert a proxy between the end-users and the old system. Now you have per-URL control over which requests go to the old system and which you re-route to the new system, allowing much easier and more granular control over what is run and who gets to see it. If your proxy is clever enough you could probably use it to send a percentage of the traffic to the new system for an individual URL until you are satisfied that things work the way they should. If your integration tests also connect through this interface, so much the better.
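As a sketch of the routing idea (not a production-grade proxy), per-URL rollout with a percentage split could look roughly like this in Python with Flask and requests; the backend addresses and path prefixes are invented for illustration, and in practice an off-the-shelf proxy is usually the better tool.

```python
import random
import requests
from flask import Flask, Response, request

app = Flask(__name__)

OLD_BACKEND = "http://old-system.internal:8080"  # hypothetical addresses
NEW_BACKEND = "http://new-system.internal:8081"

# Fraction of traffic per path prefix that goes to the new system.
ROLLOUT = {
    "/api/orders": 0.10,  # 10% of order requests hit the new code
    "/api/login": 1.0,    # login fully migrated
}

@app.route("/", defaults={"path": ""}, methods=["GET", "POST"])
@app.route("/<path:path>", methods=["GET", "POST"])
def proxy(path):
    url_path = "/" + path
    fraction = next((f for prefix, f in ROLLOUT.items()
                     if url_path.startswith(prefix)), 0.0)
    backend = NEW_BACKEND if random.random() < fraction else OLD_BACKEND
    upstream = requests.request(
        method=request.method,
        url=backend + url_path,
        params=request.args,
        data=request.get_data(),
        headers={k: v for k, v in request.headers if k.lower() != "host"},
    )
    return Response(upstream.content, status=upstream.status_code)

if __name__ == "__main__":
    app.run(port=8000)
```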
Yes, but all this will take too much time!
Well, that depends on how you look at it. It's true there is a bit of re-work involved in following these steps, but it does work, and any kind of optimization of this process assumes that you know more about the system than you probably do. I've got a reputation to maintain and I really do not like negative surprises during work like this. With some luck the company is already on the skids, or maybe there is a real danger of messing things up for the customers. In a situation like that I prefer total control and an iron-clad process over saving a couple of days or weeks if that imperils a good outcome. If you're more into cowboy stuff - and your bosses agree - then maybe it would be acceptable to take more risk, but most companies would rather take the slightly slower but much more certain road to victory.
All the software written for this project is in Python. I'm not an expert Python programmer, far from it, but the huge number of available libraries and the fact that I can make some sense of it all without having spent a lifetime in Python made this a fairly obvious choice. There is a Python distribution called Anaconda which takes the sting out of maintaining a working Python setup. Python really sucks at this: it is quite hard to resolve all the interdependencies and version issues, and using 'pip' and the various ways in which you can set up a virtual environment is a complete nightmare once things get over a certain complexity level. Anaconda makes that all manageable and it gets top marks from me for that.
The Lego sorter software consists of several main components. First there is the frame grabber, which takes images from the camera:
Scanner / Stitcher
Then, after the grabber has done its work, it sends the image to the stitcher, which does two things. The first thing it does is determine how much the belt with the parts on it has moved since the previous frame (that's the function of the wavy line in the videos in part 1; that wavy line helps to keep track of the belt position even when there are no parts on the belt), and then it updates an in-memory image with the newly scanned bit of what's under the camera. Whenever there is a vertical break between parts the stitched image gets cut and the newly scanned part gets sent on.
After the scanner/stitcher has done its job a part image looks like this:
Stitching takes care of the situation where a part is longer than what fits under the camera in one go.
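For illustration, the displacement-plus-stitch step could be sketched along these lines with OpenCV template matching; the strip height, the assumed direction of belt travel and the overall structure are assumptions for the sake of an example, not the actual sorter code.

```python
import cv2
import numpy as np

STRIP_H = 40  # hypothetical: height of the reference strip used for matching

def belt_displacement(prev_frame, cur_frame):
    """Estimate how far the belt moved (in pixels) between two grayscale frames."""
    template = prev_frame[-STRIP_H:, :]  # bottom strip of the previous frame
    result = cv2.matchTemplate(cur_frame, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(result)
    # Assuming the belt moves content toward the top of the frame, the best
    # match sits higher than where the strip was in the previous frame.
    return (prev_frame.shape[0] - STRIP_H) - max_loc[1]

def stitch(canvas, cur_frame, displacement):
    """Append the newly exposed rows at the bottom of the frame to the canvas."""
    if displacement <= 0:
        return canvas
    return np.vstack([canvas, cur_frame[-displacement:, :]])
```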
Parts Classification
This is where things get interesting. I've built this part several times now, to considerable annoyance.
OpenCV primitives
The first time around I was just using OpenCV primitives, especially contour matching and circle detection. Between those two it was possible to do reasonably accurate recognition of parts, as long as there were not too many different kinds of parts. This, together with some simple metadata (length, width and height of the part), can tell the difference between all the basic Lego bricks, but not much more than that.
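A rough sketch of what that first approach looks like with raw OpenCV calls (contour matching plus stud-sized circle detection); the thresholds, the reference-contour dictionary and the OpenCV 4 findContours signature are assumptions made for the example.

```python
import cv2

def basic_features(gray_img, reference_contours):
    """Score the part against known outlines and look for stud-sized circles."""
    _, thresh = cv2.threshold(gray_img, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    part = max(contours, key=cv2.contourArea)  # largest blob is assumed to be the part

    # Shape similarity against each reference outline (lower score = closer match).
    scores = {name: cv2.matchShapes(part, ref, cv2.CONTOURS_MATCH_I1, 0.0)
              for name, ref in reference_contours.items()}

    # Stud-sized circles hint at a top-down view of a studded part.
    circles = cv2.HoughCircles(gray_img, cv2.HOUGH_GRADIENT, 1.5, 10,
                               param1=100, param2=30, minRadius=4, maxRadius=12)
    return scores, circles
```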
Bayes
So, back to the drawing board: enter Bayes. Bayes classifiers are fairly well understood: you engineer a bunch of features, build detectors for those, create a test set to verify that each detector works as advertised, and you try to crank up the discriminating power of those features as much as you can. You then run this over as large a set of test images as you can to determine the 'priors' that will form the basis for the relative weighing of each feature as it is detected to be 'true' (feature is present) or 'false' (feature is not present). I used this to make a classifier based on the following features:
cross (two lines meeting somewhere in the middle)
circle (the part contains a circle larger than a stud)
edge_studs (studs visible edge-on)
full (the part occupies a large fraction of its outer perimeter)
height
holes (there are holes in the part)
holethrough (there are holes all the way through the part)
length
plate (the part is roughly a plate high)
rect (the part is rectangular)
slope (the part has a sloped portion)
skinny (the part occupies a small fraction of its outer perimeter)
square (the part is roughly square)
studs (the part has studs visible)
trans (the part is transparent)
volume (the volume of the part in cubic mm)
wedge (the part has a wedge shape)
width
And possibly others… This took quite a while. It may seem trivial to build a 'studs detector', but that's not so simple. You have to keep in mind that the studs could be in any orientation, that there are many bits that look like studs but really aren't, and that the part could be upside-down or facing away from the camera. There are similar problems with just about every feature, so you end up tweaking a lot to get to acceptable performance for individual features. But once you have all that working you get a reasonable classifier for a much larger spectrum of parts.
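In spirit, the resulting classifier boils down to something like the following naive-Bayes sketch over the binary features above (the continuous ones such as length, width and volume would need binning); the training-set format and the Laplace smoothing are assumptions made to keep the example self-contained.

```python
import math
from collections import defaultdict

def train(samples):
    """samples: list of (part_name, {feature: True/False}) pairs."""
    class_counts = defaultdict(int)
    feature_counts = defaultdict(lambda: defaultdict(int))
    for part, features in samples:
        class_counts[part] += 1
        for feat, present in features.items():
            if present:
                feature_counts[part][feat] += 1
    return class_counts, feature_counts

def classify(features, class_counts, feature_counts):
    total = sum(class_counts.values())
    best, best_score = None, float("-inf")
    for part, count in class_counts.items():
        score = math.log(count / total)  # the prior for this part
        for feat, present in features.items():
            # Laplace-smoothed probability of seeing this feature value for this part.
            p = (feature_counts[part][feat] + 1) / (count + 2)
            score += math.log(p if present else 1 - p)
        if score > best_score:
            best, best_score = part, score
    return best
```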
Even so, this is far from perfect: it is slow, and with every category you add you're going to be doing more work to figure out which category a part belongs to. The 'best match' has to come from a library of parts which is itself growing, so the amount of computer time spent grows geometrically. Accuracy was quite impressive, but in the end I abandoned this approach because of the speed (it could not keep up with the machine) and changed to the next promising candidate, an elimination-based system.
Elimination
The elimination system used the same criteria as the ones listed before. Sorting the properties in decreasing order of effectiveness allowed a very rapid elimination of non-candidates, so the remainder could be processed quite efficiently. This was the first time the software was able to keep up with the machine running at full speed.
There are a couple of problems with this approach: once something is eliminated, it won’t be back, even if it was the right part after all. The fact that it is a rather ‘binary’ approach really limits the accuracy, so you’d need a huge set of data to make this work, and that would probably reduce the overall effectiveness quite a bit.
It also ends up quite frequently eliminating all the candidates, which doesn’t help at all. So, accuracy wasn’t fantastic and fixing the accuracy would likely undo most of the speed gains.
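The core of the elimination approach fits in a few lines; in this sketch the feature ordering and the candidate dictionary format are purely illustrative.

```python
def eliminate(observed, candidates, feature_order):
    """observed: {feature: value} for the scanned part;
    candidates: {part_name: {feature: value}};
    feature_order: features sorted by decreasing discriminating power."""
    remaining = dict(candidates)
    for feat in feature_order:
        if feat not in observed:
            continue
        # Drop every candidate whose stored value disagrees with the observation.
        remaining = {part: feats for part, feats in remaining.items()
                     if feats.get(feat) == observed[feat]}
        if len(remaining) <= 1:
            break  # either done, or everything was eliminated (the failure mode above)
    return list(remaining)
```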
Tree based classification
This was an interesting idea. I made a little tree along the lines of the animal guessing game. Every time you add a new item to the tree it figures out which of the features are different, and it then splits the node at which the last common ancestor was found to accommodate the new part. This had some significant advantages over the elimination method: the first is that you can have a part in multiple spots in the tree, which really helps accuracy; the second is that it is lightning fast compared to all the previous methods.
But it still has a significant drawback: you need to manually create all the features first, and that gets really tedious, assuming you can even find 'clear' enough features that you can write a straight-up feature detector using nothing but OpenCV primitives. And that gets challenging fast, especially because Python is a rather slow language; if your problem can't be expressed in numpy or OpenCV library calls you'll be looking at a huge speed penalty.
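A toy version of that guessing-game tree, just to show the mechanics; the splitting logic is simplified (the real tree also allowed the same part to appear in several places) and all names are made up.

```python
class Node:
    def __init__(self, part=None, feature=None):
        self.part = part        # leaf nodes: the part stored here
        self.feature = feature  # internal nodes: the feature they test
        self.children = {}      # feature value -> child Node

def classify(root, features):
    node = root
    while node.part is None:
        node = node.children[features[node.feature]]  # assumes the value was seen before
    return node.part

def insert(root, new_part, new_features, known_features):
    """known_features: feature dict of the part already stored at the leaf we reach."""
    node = root
    while node.part is None:
        node = node.children[new_features[node.feature]]
    # Turn the leaf into an internal node that splits on the first differing feature.
    for feat, value in new_features.items():
        if feat in known_features and known_features[feat] != value:
            old_part = node.part
            node.part, node.feature = None, feat
            node.children = {value: Node(part=new_part),
                             known_features[feat]: Node(part=old_part)}
            return
```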
Machine Learning
Finally! So, after roughly 6 months of coding up features, writing tests and scanning parts I'd had enough. I realized that there was absolutely no way I'd be able to write a working classifier for the complete spectrum of parts that Lego offers, and that was a real let-down.
So, I decided to bite the bullet and get into machine learning in a more serious manner. For weeks I read papers, studied all kinds of interesting bits and pieces regarding Neural Networks.
I had already played with neural networks when they first became popular in the 1980s, after reading a very interesting book on a related subject. I used some of the ideas in the book to rescue a project that was due in a couple of days, in which someone had managed to drop a coin into the only prototype of a Novix-based Forth computer that was supposed to be used for a demonstration of automatic license plate recognition. So I hacked together a bit of C code with some DSP32 code to go with it, made the demo work, and promptly forgot about the whole thing.
A lot has happened in the land of neural networks since then. The most amazing thing is that the field almost died and is now going through an incredible renaissance, powering all kinds of real-world solutions. We owe all that to a guy called Geoffrey Hinton, who simply did not give up and turned the world of image classification upside down by winning a competition in a most unusual manner.
After that it seemed as if a dam had been broken, and one academic record after another was beaten with huge strides forward in accuracy for tasks that historically had been very hard for computers (vision, speech recognition, natural language processing).
So, lots of studying later, I had settled on using TensorFlow, a huge library of very high quality produced by the Google Brain team, where some of the smartest people in this field are collaborating. Google has made the library open source and it is now the foundation of lots of machine learning projects. There is a steep learning curve though, and for quite a while I found myself stuck on figuring out how best to proceed.
Within hours (yes, you read that right) I had surpassed all of the results that I had managed to painfully scrounge together feature-by-feature over the preceding months, and within several days I had the sorter working in real time for the first time with more than a few classes of parts. To appreciate this a bit more: approximately 2000 lines of feature detection code, plus another 2000 or so of tests and glue, were replaced by less than 200 lines of (quite readable) Keras code covering both training and inference.
The speed difference and ease of coding were absolutely incredible compared to the hand-coded features. While not quite as fast as the tree mechanism, accuracy was much higher, and the ability to generalize the approach to many more classes without writing code for more features made for a much more predictable path.
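For readers who have not seen this style of code, a transfer-learning model in Keras really is about this small. This is a generic sketch with an assumed base model, image size and class count; the posts mention custom Xception and VGG16-based models, but the code below is illustrative only and is not the author's actual ~200 lines.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_CLASSES = 1000  # hypothetical number of part classes

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # start by training only the new classification head

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=..., validation_data=...)
```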
The hard challenge to deal with next was to get a training set large enough to make working with 1000+ classes possible. At first this seemed like an insurmountable problem. I could not figure out how to make enough images and to label them by hand in acceptable time, even the most optimistic calculations had me working for 6 months or longer full-time in order to make a data set that would allow the machine to work with many classes of parts rather than just a couple.
In the end the solution had been staring me in the face for at least a week before I finally clued in: it doesn't matter. All that matters is that the machine labels its own images most of the time, and then all I need to do is correct its mistakes. As it gets better there will be fewer mistakes. This very rapidly expanded the number of training images. The first day I managed to hand-label about 500 parts. The next day the machine added 2000 more, with about half of those labeled wrong. The resulting 2500 parts were the basis for the next round of training 3 days later, which resulted in 4000 more parts, 90% of which were labeled right! So I only had to correct some 400 parts; rinse, repeat… By the end of two weeks there was a dataset of 20K images, all labeled correctly.
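Written out as code, the bootstrap loop is almost trivial; every helper below is a hypothetical placeholder, the point is only the shape of the loop.

```python
def bootstrap_labels(model, unlabeled_batches, train, predict, correct_by_hand):
    """Grow a labeled dataset by letting the model label new images and
    only correcting its mistakes by hand."""
    dataset = []
    for batch in unlabeled_batches:
        guesses = predict(model, batch)           # machine labels its own images
        labels = correct_by_hand(batch, guesses)  # human fixes only the wrong ones
        dataset.extend(zip(batch, labels))
        model = train(model, dataset)             # retrain on the growing set
    return model, dataset
```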
This is far from enough; some classes are severely under-represented, so I need to increase the number of images for those. Perhaps I'll just run a single batch consisting of nothing but those parts through the machine; no need for corrections, they'll all be labeled identically.
I've had lots of help in the week since I wrote the original post, but I'd like to call out two people by name because they've been instrumental in improving the software and increasing my knowledge. The first is Jeremy Howard, who has gone above and beyond the call of duty to fill in the gaps in my knowledge; without his course I would never have gotten off the ground in the first place. The second is Francois Chollet, the maker of Keras, who has been extremely helpful in providing a custom version of his Xception model to help speed up training.
Right now training speed is the bottleneck, and even though my Nvidia GPU is fast, it is not nearly as fast as I would like it to be. It takes a few days to generate a new net from scratch, and I simply don't think it is responsible to splurge on a 4-GPU machine in order to make this project go faster. Patience is not exactly my virtue, but it looks as though I'll have to get more of it. At some point all the software and data will be made open source, but I still have a long way to go before it is ready for that.
Once the software is able to reliably classify the bulk of the parts I'll be pushing the huge mountain of bricks through it, and after that I'll start selling off the result, both sorted parts as well as old sets.
Finally, to close off this post, an image of the very first proof-of-concept, entirely made out of Lego:
One of my uncles cursed me with the LEGO bug: when I was 6 he gave me his collection because he was going off to university. My uncle and I are relatively close in age; my dad was the eldest of 8 children and my uncle is the youngest. So for many years I did nothing but play with Lego, building all kinds of machinery, and in general had a great time until I discovered electronics and computers.
So, my bricks went to my brother, who in turn gave them back to my children when they were old enough, and so on. By the time we reached 2015 this had become a nice collection, but nothing you'd need machinery to sort.
That changed. After a trip to Legoland in Denmark I noticed how even adults buy Lego in vast quantities, and at prices that were considerably higher than what you might expect for what is essentially bulk ABS. Even second-hand Lego isn't cheap at all: it is sold by the part on specialized websites, and by the set, the kilo or the tub on eBay.
After doing some minimal research I noticed that sets fetch roughly 40 euros per kg, that bulk Lego goes for about 10, and that rare parts and Lego Technic go for hundreds of euros per kg. So there exists a cottage industry of people who buy Lego in bulk, buy new sets, and then part all this out or sort it (manually) into more desirable and thus more valuable groupings.
I figured this would be a fun thing to get in on, and to build an automated sorter for. Not thinking too hard about it, I put in some bids on large lots of Lego on the local eBay subsidiary and went to bed. The next morning I woke up to a rather large number of emails congratulating me on having won almost every bid (lesson 1: if you win almost all your bids you are bidding too high). This was both good and bad. It was bad because it was probably too expensive, and also because it was rather more than I expected. It was good because it provided enough motivation to overcome my natural inertia and actually go and build something.
And so the adventure started. In the middle of picking up the lots of Lego my van got stolen, so we had to make do with an elderly Espace; one lot was so large it took 3 trips to pick it all up. By the time it was done a regular garage was stacked top to bottom with crates and boxes of Lego. Sorting this manually was never going to work: some trial bits were sorted, and by my reckoning it would take several lifetimes to get it all organized.
Computer skills to the rescue! A first proof of concept was built of - what else - lego. This was hacked together with some python code and a bunch of hardware to handle the parts. After playing around with that for a while it appeared there were several basic problems that needed to be solved, some obvious, some not so obvious. A small collection:
fake parts needed to be filtered out
There is a lot of fake lego out there. The problem is that fake lego is worth next to nothing and if a fake part is found in a lot it devalues that lot tremendously because you now have to check each and every part to make sure you don’t accidentally pass on fake lego to a customer.
discolored parts
Lego is often assembled as a set and then put on display. That’s nice, but if the display location is in the sun then the parts will slowly discolor over time. White becomes yellow, blue becomes greenish, red and yellow fade and so on. This would be fairly easy to detect if it wasn’t for the fact that lego has a lot of colors and some of the actual colors are quite close to the faded ones.
damaged parts
Not all Lego is equally strong, and some parts are so prone to breakage it is actually quite rare to find them in one piece. If you don’t want to pass on damaged parts to customers you need to have some way of identifying them and picking them out of the parts stream.
dirty parts
Most Lego that was bought was clean, but there were some lots that looked as if someone had been using them as growth substrate for interesting biological experiments. Or to house birds…
feeding lego reliably from a hopper is surprisingly hard
Lego is normally assembled by children's hands, but a bit of gravity and some moving machine parts will sometimes do an excellent job of partially assembling a car or some other object. This tendency is especially pronounced when it comes to building bridges, and I've yet to find a hopper configuration wide and deep enough that a random assortment of Lego could not form a pretty sturdy bridge across the span.
The current incarnation uses a slow belt to move parts from the hopper onto a much faster belt that moves parts past the camera.
scanning parts
Scanning parts seems to be a trivial optical exercise, but there are all kinds of gotchas here. For instance, parts may be (much!) longer than what fits under the camera in one go, parts can have a color that is extremely close to the color of the background and you really need multiple views of the same part. This kept me busy for many weeks until I had a setup that actually worked.
parts classification
Once you can reliably feed your parts past the camera you have to make sense of what you're looking at. There are 38000+ shapes and there are 100+ possible shades of color (you can roughly tell how old someone is by asking them what Lego colors they remember from their youth). After messing around with carefully crafted feature detection, decision trees, Bayesian classification and other tricks, I finally settled on training a neural net and using that to do the classification. It isn't perfect, but it is a lot easier than coding up features by hand: many lines of code, test cases and assorted maintenance headaches were replaced by a single classifier based on the VGG16 model, with some Lego-specific tweaks, trained on large numbers of images to get the error rate to something acceptable. The final result classifies a part in approximately 30 ms on a GTX 1080 Ti Nvidia GPU. One epoch of training takes longer than I'm happy with, but that only has to be done once.
distributing parts to the right bin
This also was an interesting problem. After some experimenting with servos and all kinds of mechanical pushers, the final solution was to simply put a little nozzle next to the transport belt and to measure very precisely how long it takes to move a part from the scan position to the location of the nozzles. A well-placed bin then catches the part.
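The timing itself is simple arithmetic once the belt speed and the nozzle positions are calibrated; a tiny sketch with invented numbers:

```python
BELT_SPEED_MM_S = 600.0  # hypothetical belt speed in mm/s
NOZZLE_POSITION_MM = {"2x4_brick": 350.0, "technic_pin": 500.0}  # distance from scan position

def fire_time(scan_time_s, part_class):
    """Moment at which the nozzle for this class of part should fire."""
    return scan_time_s + NOZZLE_POSITION_MM[part_class] / BELT_SPEED_MM_S
```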
Building all this has been a ton of fun. As I wrote above, the prototype was made from Lego; the current one is a hodge-podge of re-purposed industrial gear, copious quantities of crazy glue and a heavily modified home running trainer that provides the frame to attach all the bits and pieces to.
Note that this is by no means finished, but it's the first time that all the parts have come together and it actually works well enough that you can push kilos of Lego through it without interruption. The hopper mechanism can still be improved a lot, there is an easy option to expand the size of the bins, and there are still obvious improvements to be made to the feeder. The whole thing runs very quietly; a large factor in that is that even though the system uses compressed air, the compressor is not your regular hardware store machine but one that uses two freezer motors to very quietly fill up the reserve tank.
Here is a slow run tracking some parts so you can see how all the bits work together (it can run much faster):
A faster run, still slow enough that you can hopefully see what is going on:
I grew up in Amsterdam, which is a pretty rough town by Dutch standards. As a kid there are all kinds of temptations, and the peer pressure to join in on bad stuff is hard to escape. But somehow that never was a big factor for me; computers and electronics kept me fascinated for long enough that none of it ever mattered. But being good with computers is something that, sooner or later, you realize can also be used for bad.
For me that moment came when one of my family members showed up at my combined house-office in the summer of 1997. The car he drove was a late-model E-Class Mercedes. This particular family member has a pretty checkered history. When I still lived with my mom as a kid he would show up once or twice a year, unannounced, comment on our poor condition and give me a large bill to go to the night store and get luxury food. Salmon, French cheese, party time. Always flashing his success and mostly pretending to be wealthy. He vowed he'd pay for my driving license, which is a big deal here in NL because it costs lots of money, but then never did. That was fine by me, I could easily pay for it myself, but it didn't exactly set the stage for a relationship of trust. Also, in the years prior to this visit I had never seen or heard from him.
What had changed was this: a few weeks prior to the visit there had been a large newspaper article about me, and one of the things it mentioned was my skill with computers. This must have been the reason my family member decided that those skills were undervalued by the marketplace and that I needed a bit more in terms of opportunities.
So here was his plan: he'd bring me one of those cars every week. I could drive it as long as I made sure that when it went back to him it had 200,000 kilometers less on the counter than when he brought it. Every car would come with 5000 guilders in the glove compartment, mine to keep. Now, I'm sure this is a hard thing to relate to, but when your family, even if you hardly ever see them, shows up and makes you a proposition, you can't just tell them to fuck off. Especially not when they're dangerous people. So I had a real problem: there was no way I was going to do this, but saying no wasn't simple either.
The backstory to this is that those cars were taxis which had been used intensively in the two years since they were new, and their market value as low-mileage cars was much higher than their market value with 200K+ on the counter.
In the end I clued in on the fact that my family member needed me because he was clueless about the difficulty factor involved. And in fact, with my love for puzzles, that was the one thing that caused an itch somewhere at the back of my mind: could I do it? An interesting hack, and not because it was worth a lot of money. But this also offered me an easy out: I would simply tell him that I couldn't do it. There was no way he would be able to know whether or not I was lying. Yes, 5000 guilders per week was (and still is, though we use the Euro now) a boatload of money. And they're nice cars. But some lines you just don't cross.
Because what I could easily see is that this would be a beginning, and a bad beginning too. You can bet that someone somewhere will lose because of crap like this. (Fortunately, the EU has since made odometer fraud illegal.) You can also bet that once you've done this thing and accepted the payment, you're on the hook. You are now a criminal (or at least, you should be) and that means you're susceptible to blackmail. The next request might not be so easy to refuse and could be a lot worse in nature. So I wasn't really tempted, and I always felt that 'but someone else will do it if I don't' was a lousy excuse.
If you're reading this as a technical person: there will always be technically clueless people who will attempt to use you and your skills as tools to commit some crime. Be sure of two things: the first is that if the game is ever up they'll do everything they can to leave you holding the bag, and the second is that once you're in you won't be getting out that easily.
It must have seemed like a good idea at the time. Facing a sizable fraction of his own party that wanted to secede from the EU, David Cameron made the gambit of the century: let's have a referendum and get this behind us once and for all. He never for one second thought that the 'leave' faction would be able to win that referendum; the end result would be to cement his own position for at least another election cycle to come. Alas for everybody involved, we now know this was an extremely costly mistake.
Amidst claims of regret and of being duped, the UK population is rocked by the impact of what they've done, but even if everybody who wanted to were allowed to 'switch sides' and vote again, the 'leave' camp would still win, though by a smaller margin.
There are a number of driving forces behind the ‘brexit’ vote, and as I watched the whole thing unfold from my (Dutch, and so EU) vantage point I tried to make a small catalog of them without assigning them any relative weights.
The EU government is spectacularly out of touch with its subjects and does a very poor job of communicating the pluses and the minuses of being part of the union. As one of those subjects, and fairly politically informed, it always amazes me how opaque 'Brussels' is to those who would like to know how it all functions and what options we as ordinary citizens have to influence the proceedings outside of the votes we cast. There are veritable mountains of documents about the EU, but there is no relatively accessible piece of information that gives a person with an average education an idea of how it all works and what the tools at hand are. The EU is generally viewed as a cost without upside (even though the main upside is that the EU is much more stable than the countries that it unites), a net negative and a drain rather than a benefit. The fact that Brussels diplomats routinely take compensation without any performance whatsoever, and that corruption is perceived as being widespread, doesn't help either. In general, EU politics are far removed from the voters on the ground. This is as much a real problem as it is one of communication, and it can't be solved easily.
The UK, a former world power, has seen its position marginalized further and further over the last five decades. An older generation hankers back to days long gone and would like to see Great Britain restored to its former glory. This is understandable, but in my opinion somewhat misinformed. The world is a much more connected place today than it was 50 years ago, and next to a unified EU with the UK as an outsider (and, if we are to believe the latest developments, with England as an outsider) it is not a very important country economically. The EU is a very large economic entity; when it came to negotiating with 27 countries individually, the UK of the past had formidable clout, but today the situation has changed very much and turning back the clock like this simply isn't going to work.
Immigration, always a hot topic when things are not going well. The UK has its share of immigration issues, just like the rest of Europe. Unlike most of the rest of Europe, though, as an island it has the illusion that its physical borders insulate it from the issues the rest of Europe struggles with as soon as the subject is the free movement of people. Right or wrong, it doesn't matter: there are a lot of people in the UK who feel that 'the foreigners took their jobs', or that refugees are the kind of people there simply isn't room for. It's a tough problem, but I highly doubt it is a problem large enough to justify isolating a country from its main trade partners. On the one hand, there definitely is some truth to the downward pressure on wages from cheap competition (so when this affects you directly your vote for 'exit' is probably in the bag); on the other, a large influx of people who are most likely not going to be net contributors to the economy isn't going to help either. But, and this is the bigger issue, exiting the EU will come with the requirement to re-negotiate a whole pile of treaties, and the EU is most likely simply going to turn all the pills that were tough to swallow in the past into bargaining chips. And this time the UK (or what's left of it) will not be in a position to refuse much of anything. So I highly doubt that this subject will be resolved through an exit of the UK from the EU.
Automation: unlike immigrants vying for the jobs traditionally held by UK-born blue collar workers (many of them second generation immigrants themselves), the automation wave of the last 30 years has done as much or more to damage the prospects of those who do not have a high level of education and those who do not work in the immediate vicinity of a large population center. More and more jobs disappear through automation in almost every branch of industry. This has led to record unemployment, and governments the world over (including the UK's) are struggling with how to deal with it. For a laid-off factory or agricultural worker it does not matter what the underlying reason for being jobless is; the frustration with the establishment to whom they would look to solve this is definitely understandable.
General protest votes against those in power seem to me to make up the remainder of the group that voted for the exit, and quite a few of those are now in the unenviable position of having received what they wished for: a country whose leadership has already started infighting and which - to me as an outsider at least - appears to be utterly rudderless, which for a former seafaring giant is a very bad position to be in.
If the UK were a boat, it would appear as if the captain had descended into the hold with an axe and had made a giant hole in the bottom of the boat to prove that it can’t be sunk. Fortunately the UK is an island and literally sinking it is an impossibility, but the damage done dwarfs anything I’ve seen a political entity ever do to their own country.
The really puzzling thing about the composition of the 'leave' voters is that a very large number of them stand squarely in the way of the blow that will land on the UK economy once the exit is a fact. I can see 'change for change's sake' as an option, but when it is all but a certainty that your own position will come out much worse, it makes me wonder whether the consequences have been thought through.
Juncker & co are happy to finally kick the naughty kid out of the class, and even though I understand their position I'd like to caution them not to be too rash; it's just another example of the EU doing what it does best: deciding without any visible kind of process behind the decision (and I don't recall voting for Juncker). For one, a very large chunk of the UK voted 'remain', and pushing the UK to exit too fast could very well alienate this extremely important faction within the UK; for another, it would appear that France and Germany would like to see the UK cut up into pieces, or no longer be a factor of note in EU politics, so they can drive their plans forward unimpeded.
The damage is done. I for one would very much like to see restraint on the part of the EU leadership in how they deal with the self-inflicted crisis in the UK, and to see them limit the damage where possible. If the UK loses some of its special status then that would be acceptable, but to push the UK out when it may be possible to retain it - or a large fraction of it - through some kind of compromise would be a mistake worthy of a Cameron, and we already know how that ended.