One thing stands out when you try playing with evolutionary systems.
Evolution is _really_ good at gaming the system. Unless you are very careful about specifying all of the constraints you care about, you can end up with a solution that is very clever but not quite what you had in mind. Here power consumption is the issue. If you tried to evolve a sturdy chair you might end up with something that is 1 mm tall, or maybe a fuel-efficient car that exploits continental drift.
For circuit simulation there are a bunch of potential pitfalls beyond power consumption. I think you would probably need to do multiple runs with components of varying values within their specified precision. I can see evolution getting some sort of benefit by exploiting the fact that two identically specced components behave _exactly_ the same way. Something that would not happen in real life.
Really this is an effect of all optimization approaches, not just "evolutionary" ones. Even simple parameter hill climbers will do that.
A fun personal example: many years ago I was trying to optimize an antenna design. I used a simple blackbox optimizer to adapt a parametrization of the geometry and a simulator to characterize the performance. I started it off and it was slowly making progress. The next day I came back and was excited to see _very good results_ ... but it turned out that it had made the length of the antenna _negative_ and the simulation was spouting nonsense (like a peak gain that was a complex number). :)
The fundamental unreality of negative lengths must have been why I didn't think to add that as a constraint, or to make sure the simulation handled them gracefully... much in the same way that input fuzzing can turn up nasty bugs in otherwise well-tested and competently written software.
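In hindsight, a cheap guard in the fitness function would have caught it. Something like this rough Python sketch, where `evaluate_antenna` and the single length parameter are made-up stand-ins for the real simulator wrapper:

```python
import math

# Hypothetical stand-in for the real simulator wrapper; the name and the
# single length parameter are made up purely for illustration.
def evaluate_antenna(length_m: float) -> float:
    return 10.0 * math.log10(1.0 + length_m)   # fake "gain" so the sketch runs

def fitness(length_m: float) -> float:
    # Reject unphysical geometry before it ever reaches the simulator.
    if length_m <= 0.0:
        return -math.inf
    gain = evaluate_antenna(length_m)
    # Distrust the simulator's output too: NaN or infinite results should
    # score as badly as possible, not as a jackpot.
    if math.isnan(gain) or math.isinf(gain):
        return -math.inf
    return gain

print(fitness(0.5), fitness(-0.5))   # valid length vs. "negative antenna"
```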
> I think you would probably need to do multiple runs with components of varying values within their specified precision.
That's called Monte Carlo analysis. It's important for standard simulation applications as well because most components are specified with certain tolerances. Your amplifier might look stable at the expected component values, but if all of your resistors are at the bottom or top of their tolerance range, it might become unstable!
Commercial SPICE simulators (like HSPICE, for example) come with the option to perform this built-in. You can do Monte Carlo analysis with NGSPICE, but you have to organize the input and output yourself.
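If you want to roll it yourself, the shape of it is simple enough. A minimal sketch (Python, with an analytic RC corner frequency standing in for an actual NGSPICE run, just to keep the example self-contained):

```python
import math
import random

def cutoff_hz(r_ohm: float, c_farad: float) -> float:
    # First-order RC low-pass corner frequency; stands in for a real
    # SPICE simulation purely so this example runs on its own.
    return 1.0 / (2.0 * math.pi * r_ohm * c_farad)

def monte_carlo(nominal_r=10e3, nominal_c=10e-9, r_tol=0.05, c_tol=0.10, trials=1000):
    results = []
    for _ in range(trials):
        # Draw each component uniformly within its tolerance band.
        r = nominal_r * random.uniform(1 - r_tol, 1 + r_tol)
        c = nominal_c * random.uniform(1 - c_tol, 1 + c_tol)
        results.append(cutoff_hz(r, c))
    return min(results), max(results)

lo, hi = monte_carlo()
print(f"corner frequency spread over tolerances: {lo:.0f} Hz .. {hi:.0f} Hz")
```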
> I can see evolution getting some sort of benefit by exploiting the fact that two identically specced components behave _exactly_ the same way. Something that would not happen in real life.
You get pretty close with matched transistors, especially when they're on the same die. But yeah, Monte Carlo.
Another advantage of this is that it'll smooth the fitness landscape somewhat, giving better convergence.
Another advantage of using simulated annealing instead of evolutionary algorithms is that you can add this sort of noise in pretty much for free (instead of doing multiple runs, you basically just lower the cooling rate).
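Roughly like this (a toy 1-D objective of my own, not the circuit problem): each evaluation gets fresh "component" noise, and the slow cooling schedule does the averaging that multiple runs would otherwise do.

```python
import math
import random

def noisy_fitness(x: float) -> float:
    # Every evaluation jitters the result a little, standing in for a
    # simulation run with components drawn fresh from their tolerance bands.
    return -(x - 1.0) ** 2 + random.gauss(0.0, 0.05)   # optimum near x = 1

def anneal(steps=20000, temp=1.0, cooling=0.9995):
    x = random.uniform(-5.0, 5.0)
    for _ in range(steps):
        candidate = x + random.gauss(0.0, 0.1)
        delta = noisy_fitness(candidate) - noisy_fitness(x)
        # Accept improvements always, regressions with a temperature-dependent
        # probability; the slow cooling effectively averages out the noise.
        if delta > 0 or random.random() < math.exp(delta / temp):
            x = candidate
        temp *= cooling
    return x

print("annealed solution:", anneal())   # should land close to 1.0
```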
Imo even if it fails, it can come up with designs and ideas you didn't think about. A lot of the time when you design something you just iterate over possible ways to do it. A computer can do this too.
> If you tried to evolve a sturdy chair you might end up with something that is 1 mm tall, or maybe a fuel-efficient car that exploits continental drift
great way to illustrate your point, thanks for the laugh
Karl Sims worked on evolving virtual creatures at the Media Lab. He used evolutionary search to design creatures that could perform various kinds of locomotion: crawling, jumping, swimming, etc. [1] I heard the story that his original physics simulation had a bug which the evolutionary search exploited to make creatures that could fly.
From the example in the article where the design tried to draw about a quarter amp of current, you could make evolution work to help you by powering the circuit with a fixed 50 ohm resistor in series with the power supply and letting "nature take its course" WRT getting a reasonable output waveform.
Similar tricks in the "output" work for your chair and car examples.
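To put numbers on it (assuming, say, a 5 V supply, which is my assumption rather than anything from the article):

```python
V_SUPPLY = 5.0    # assumed supply voltage (my number, not from the article)
R_SERIES = 50.0   # the suggested series resistor

for i_draw in (0.001, 0.010, 0.050, 0.100):
    v_rail = V_SUPPLY - i_draw * R_SERIES
    print(f"drawing {i_draw * 1000:5.1f} mA -> rail sags to {v_rail:4.2f} V")

# The article's ~0.25 A design would need 12.5 V across the resistor alone,
# so the rail simply collapses and the output waveform degrades, which is
# exactly the penalty "nature" applies to greedy designs.
```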
>The plucky chip was utilizing only thirty-seven of its one hundred logic gates, and most of them were arranged in a curious collection of feedback loops. Five individual logic cells were functionally disconnected from the rest-- with no pathways that would allow them to influence the output-- yet when the researcher disabled any one of them the chip lost its ability to discriminate the tones. Furthermore, the final program did not work reliably when it was loaded onto other FPGAs of the same type.
Robustness could have been added as one extra constraint. I.e. the fitness function would also value the ability of the circuit to work on many instances of the FPGA.
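Concretely, the fitness function could score each candidate on several devices and keep only the worst case. A sketch in Python, where `score_on_device` just fakes per-device variation so the example runs; in reality it would load the bitstream onto a physical chip or a randomized device model:

```python
import random

# Stand-in for loading the candidate onto one FPGA and scoring how well it
# separates the two tones; it only fakes per-device variation here.
def score_on_device(bitstream, device_seed: int) -> float:
    rng = random.Random(hash((tuple(bitstream), device_seed)))
    return rng.uniform(0.0, 1.0)

def robust_fitness(bitstream, device_seeds) -> float:
    scores = [score_on_device(bitstream, s) for s in device_seeds]
    # Worst-case scoring: a circuit only does well if it works on every chip,
    # which discourages exploiting the quirks of one particular die.
    return min(scores)

print(robust_fitness([0, 1, 1, 0], device_seeds=range(5)))
```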
This reminds me of Physically Unclonable Functions (PUFs) that tries to utilize differences in manufactured devices as a method of distinquishing unique devices. This is used for identification and authentication. There are for example PUFs that use differences in delays between logic blocks in FPGAs to generate unique responses to applied patterns/challenges.
Even with that setup you end up with overfitting after a point. I.e. instead of getting something that works on every FPGA of that type, you start getting things that work on the specific FPGAs you provide. The same thing happens in machine learning for things like recognizing photos: after a while your algorithm stops recognizing cars and starts just recognizing those specific photos of cars.
Ideally, you want to take a bunch of FPGAs, pull out a random subsample of them only for acceptance testing, and stop evolving the circuit when the performance on the acceptance testing subset starts getting worse.
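Sketched out, that's just a holdout split plus early stopping, same as in ML. In this Python sketch, `evolve_step` and `score_on` are hypothetical hooks into whatever GA you're running:

```python
import random

def split_holdout(devices, holdout_fraction=0.3, seed=0):
    # Set aside a random subset of the physical FPGAs purely for acceptance
    # testing; evolution never sees their scores during selection.
    devices = list(devices)
    random.Random(seed).shuffle(devices)
    cut = int(len(devices) * holdout_fraction)
    return devices[cut:], devices[:cut]   # (training set, acceptance set)

def evolve_with_early_stop(evolve_step, score_on, population, train, holdout, patience=5):
    # evolve_step(population, devices) -> next population   (hypothetical hook)
    # score_on(population, devices)    -> best score found  (hypothetical hook)
    best_holdout, stale = float("-inf"), 0
    while stale < patience:
        population = evolve_step(population, train)    # selection sees training chips only
        holdout_score = score_on(population, holdout)  # measured, never optimized against
        if holdout_score > best_holdout:
            best_holdout, stale = holdout_score, 0
        else:
            stale += 1                                 # stop once holdout keeps getting worse
    return population
```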
Even if it worked on the specific FPGAs you provide, it might not work reliably at different temperatures (like the one in the article), or if the noise source is slightly further away, etc.
There is no way to completely stop overfitting, because it's difficult if not impossible to test in every possible environment we want it to work under.
That's basically what I said, and this is optimization, not machine learning. The problem is the genetic algorithm fits the specifics of the FPGA and the environment it is optimized in, and doesn't work reliably in other environments or on other FPGAs.
I love this story. It has made me wonder if you couldn't use a similar technique on a larger scale: could you write device drivers for hardware this way?
Bar none, this is my favorite genetic algorithm story. Not because of how it 'failed' but because of the fascinatingly unexpected way it succeeded. This type of solution is one you would likely NEVER see come from classically trained EEs. That's where I see much of the value in GA/EA: finding solutions that no sane engineer in their right mind would ever come up with.
I remember reading this article a while back, and then some time later, reading another similar article, also about evolving circuits on FPGAs. (Note, by the way, that this article is only in theory evolving digital circuits: the run environment in this case is indeed analog, and the evolutionary algorithm exploited this in order to create circuits that had complex effects that would not be possible in a purely digital model!)
In the other case, the evolutionary algorithm found a long trace on the board that was to be an output from the FPGA (and so was undriven), and used it as an antenna to pick up the room's mains power and use it as a clock! I can't seem to find a reference to that one anymore, but if anyone else remembers this, I'd love to read about it again.
I remember reading this article in my High School's library a few weeks before graduation, all those years ago. It was what convinced me to major in CS. Even though it took me until last year to finally earn my BS, my very final project was inspired by that memory. I love when I see references to the article, or related work, because it is not only a reminder of my journey, but of how amazing the things we do are.
This reminded me of interesting work Cornell has done in evolving circuits that have no single points of failure [1].
One thing to notice is the emphasis on comparing the GA performance to other search methods (like hill climbing). It would be nice to see that more in blog posts like this.
This is cool. I actually sat down just today to start coding something that would help me automate my exploration of a tunable band-pass filter (or set of tunable filters which could be switched).
In my degree programme I did computer systems, not RF - but even this involved designing/building microwave amplifiers, playing with network analyzers and learning transmission line theory. I wish I could say that this work ~10 years ago has properly prepared me for the task but it seems I have many months of weekend study ahead of me...
In any case, even back then the RF students were generating pretty funky antenna designs out of modeling tools that ran for days sweeping through physical dimensions, numbers & types & topologies of elements, etc. to optimize toward some goal.
Analog Devices has a filter wizard[0] on their site, which can help to create filters using their components. It might fit your needs, or at least give you some idea of what might be required.
It's a cool tool of course, but I have an interesting set of spurs to filter out, and they're quite close to the carrier in some places, over quite a large frequency range. I have a set of fixed filters that mostly works, but the next design will need to more completely suppress these spurs over the entire range.
A bandgap circuit subtracts log(A.x) and log(B.x) to get a ratio log(A/B) that is independent of x (x>0), and to achieve this topology by evolution is a striking result. The question is whether one would trust the resultant topology in the absence of a known pattern. This is not casting a doubt on the evolutionary algorithm, it is more about the quality of the device compact models and the ability of the circuit equation solver to capture subtle instabilities and starting problems.
My guess is that today's compact models and solvers are both good enough for us to contemplate a 'Turing test' for analog circuit design.
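For anyone skimming, the identity being relied on is just log(Ax) - log(Bx) = log(A/B), independent of x. A quick numerical check (arbitrary A and B of my choosing):

```python
import math

A, B = 8.0, 2.0   # arbitrary ratio constants, just to check the identity
for x in (1e-6, 1e-3, 1.0, 42.0):
    diff = math.log(A * x) - math.log(B * x)
    print(f"x = {x:>8g}: log(A*x) - log(B*x) = {diff:.6f}, log(A/B) = {math.log(A / B):.6f}")
```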
This kind of work was published in Scientific American about ten years ago.
Anyway, this post is more detailed than the introductory Scientific American piece. This kind of idea did not explode over the past ten years; something in the details might be wrong.
Most genetic algorithms break when you try to make them solve problems that are too complicated. They do not scale really well. Mother Nature actually has a lot of hacks to scale well. Evolution is a process that evolved by itself. If you want a genetic algorithm to solve complex problems, at some point you have to let the reproduction algorithm itself evolve.
To give you an idea, only recently have we discovered that totally different species can exchange genetic sequences through viruses (this is called horizontal gene transfer), and we are still not really sure why sexual reproduction is a good idea when you can already mutate, have horizontal transfers, and clone yourself.
"Make random mutations, select the best" is really not how nature works.
It's interesting to read about hypothesized age of plant (and I guess animal) species based on their molecular "clock rate" (rate of mutations) in different parts of the genome.
The mutations indeed are far from random. More important regions which fundamentally affect the viability of the organism evolve incredibly slowly and conservatively. The less important bits are less restricted and mutate more readily.
Additionally it's interesting to consider an individual in the context of its lineage. Some branches are extremely rich in diversity but mightn't have the population numbers of more "boring" clades.
It's easy to mistake success at population size as the end-game, the winner if you will, of evolution. Population size helps but at the end of the day it might be the smaller population, more diverse clades that are more adaptable to environment changes and survivability into the future in general.
Well, evolution can evolve things like DNA repair, gene transfer, and sex. That's a very different thing from evolving the representation itself. Even if it did happen, it's not remotely practical for computer simulations.
(This isn't to say that evolutionary approaches are all that useful; they're usually outclassed by other optimization approaches, and where they work it's either in trivial cases or with mind-boggling amounts of computation.)
First important question to ask about any evolutionary algorithm experiment:
Did you check your experiment against a simulated annealing process?
It's simpler to implement, converges faster, and gives better results. So far I am unaware of any optimization problem that was solved better with an evolutionary/genetic algorithm than with simulated annealing. That is, unless the evolution part (crossover, mainly) is inextricably connected to the problem at hand and therefore other algorithms are simply not suitable to even try.
One thing to watch out for with these sorts of approaches is whether you are actually improving on an exhaustive search of all possibilities (perhaps with some smart pruning).
What do you mean watch out? Isn't "improving on an exhaustive search of all possibilities (perhaps with some smart pruning)" a way of saying global optimization? Unrelated, great job with JuliaOpt, if that is you.
I guess what I'm getting at is that a GA with 100% mutation is basically randomly generating solutions and evaluating them. Now, no one does that, but if your GA eventually explores a large amount of the search space in order to get a good solution, then it suggests that it's not the right approach. So yes, some sort of "global optimization" where you acknowledge you are basically going to look at the whole domain but avoid the silly cases/provably worse cases would be the alternative.
You can use it to build anything given enough time and a properly detailed and correct list of requirements. This includes things like: behavior when the input voltage goes dead, preventing popping, output resistance, harmonic distortion, ... If you were to use this, I recommend trying it on a small subsystem of the circuit, not the entire circuit, unless you are just looking for something neat.
I however don't know how practical this all is :)
One question to bear in mind when doing this sort of thing: just because it works in SPICE doesn't mean it's going to work as intended when you build it. SPICE is a very idealized world. That may or may not invalidate this approach.
Source: several years of arguing with SPICE in a professional capacity.
Unfortunately, using SPICE pretty much requires knowing exactly how it works so that you understand its limitations. In turn, this makes it hard to learn, especially for students.
At the GitHub repo, it is mentioned that you need a Linux machine to get started. What is a recent, decent Linux laptop to look at? I have wanted to buy a laptop for Linux again for some time, and maybe someone here has some buying advice?
I've just started using it with Debian but I'd say the dependencies are readily available on any mainstream distro. Many Linux laptop users seem pretty happy with Lenovo, I'm typing this on an x230 - Lenovo tend to fairly consistently use well-supported chipsets.
Having said that you could use VirtualBox to get a Linux environment up fairly painlessly from Windows, if that's what you're on now.
This is a really cool way of tackling circuit design! Being able to compare the significant changes between similar designs is something I used to wish I was able to do with ease.