Hunter Gabbard, a PhD candidate at the University of Glasgow and a member of the LIGO Scientific Collaboration, spoke to Sean Heath for our Science podcast in the first of a three-part series discussing different elements of machine learning. In today’s episode, we discuss the relationship between human and machine learning, and how these machines can even help us better understand some of the universe’s biggest mysteries.
Sean Heath: So today I want to start off with the basic concept of human learning. Talk to me about how something goes from an idea, to research, to something like established knowledge.
Hunter Gabbard: I think to understand that, you must first ask the question ‘how do humans learn?’ When you try to make a machine think the same way as a human, many problems arise. One of the biggest is having to give the machine a problem to solve. A human isn’t going to be told what questions to answer, but with a machine you have to tell it what to solve. We haven’t gotten to the point, at least with machine learning, where machines establish questions on their own. We’re able to ask those questions, yes, but then you have to come up with the problem for the machine to solve.
If someone were to figure that out, they’d be a very rich person.
SH: As you mentioned, one person gets a question, and they test that question. Then they report those results, and the process is repeated until there’s a large body of data that can be examined and becomes established knowledge. Machine learning can take that process and accelerate it exponentially, but one thing machine learning can’t do that humans do naturally is look for shortcuts. Machines don’t take the initiative to find shortcuts when solving a problem, do they?
HG: It depends on how you define shortcuts. With a machine learning algorithm, the reason it’s so much faster is that most of the effort it would take to solve a problem with a normal program, or even a normal human brain, is spent at the very beginning, as the algorithm trains itself over and over again on the problem you’re trying to get it to solve. Once it’s trained on that specific problem, you just say ‘OK, I’ll feed it some new data it’s never seen before’, and that’s when it will produce an answer. Once you’ve trained the machine on that one problem, it’s sort of set in its ways; you don’t have to adjust anything. You set it up and it’s on its way.
SH: So it’s basically a constant series of layers of “if-then” statements?
HG: Yeah, it’s basically a bunch of filters, in a sense.
And this specific variant of machine learning I’m referring to is deep learning. There are all sorts of variants of machine learning out there. Some are simple; others are more complex, like generative adversarial networks. I like to think of those as a cops-and-counterfeiters game. You have two of these AIs competing against each other: one neural network is the counterfeiter, the other is the cop. The counterfeiter network is trying to make fake Picassos, while the cop network is trying to determine their authenticity. All you’re doing is feeding noise to these neural networks, and as a result something pops out. The cop network compares the AI’s product to real Picasso paintings and figures out which is which. If the cop successfully figures it out, it is rewarded; if it is fooled, it’s penalized. The counterfeiter network is likewise rewarded if it fools the cop. I think that’s the best analogy: it’s a sort of Darwinian, adversarial evolution. It’s very interesting, and these networks are very powerful and can make some weird things. Google used this approach to produce “dream-like” images from the data it was being fed. It’s a very hot topic right now.
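The reward-and-penalty scheme described here corresponds to the standard GAN loss functions. Below is a toy sketch in Python of just the scoring logic, with made-up discriminator scores; it is an illustration of the idea, not any production implementation.

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    """Binary cross-entropy loss for the 'cop' network.

    d_real: cop's scores in (0, 1) for genuine samples
    d_fake: cop's scores in (0, 1) for counterfeits

    The cop is rewarded (low loss) for scoring real samples
    near 1 and counterfeits near 0.
    """
    return -np.mean(np.log(d_real)) - np.mean(np.log(1.0 - d_fake))

def generator_loss(d_fake):
    """Loss for the 'counterfeiter' network: it is rewarded
    (low loss) when its fakes fool the cop, i.e. score near 1."""
    return -np.mean(np.log(d_fake))

# A confident, correct cop: low cop loss, high counterfeiter loss.
print(discriminator_loss(np.array([0.9]), np.array([0.1])))  # ≈ 0.21
print(generator_loss(np.array([0.1])))                       # ≈ 2.30
```

Training alternates between the two: each network takes gradient steps on its own loss, which is exactly the adversarial back-and-forth of the cop and the counterfeiter.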
SH: One of the things you’re doing at the LIGO Scientific Collaboration is applying those principles and processes to more in-depth study of black holes. I think, with the recent passing of Stephen Hawking, it’s very appropriate to talk about black holes today. How is AI opening up avenues of study for you with regard to black holes?
HG: What we do at LIGO is look at gravitational waves, which were originally predicted by Einstein through his general theory of relativity. You get these waves from very violent events like the merger of two neutron stars or, in this instance, the merger of two black holes. When you get that merger, ripples in space-time travel outward from the event and a lot of energy is dispersed. Some of that energy is in the form of gravitational waves, which eventually reach us. We measure these waves using essentially a very fine ruler that’s able to measure the stretching and squeezing of space-time. Now, the signal is very, very tiny. Measuring it would be like standing on Earth and trying to watch a star wiggle by the width of a human hair. Because these signals are so small, they’re buried in the noise produced by the detectors, so it’s a data analysis problem at that point. Right now, we have something called matched template filtering. We construct different templates of gravitational waveforms and produce a template bank. We run these templates against the detector data, and the best match gives you back the signal.
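The template-bank search can be sketched with a toy example. Here simple sinusoids stand in for real gravitational waveforms, and a weak "signal" is buried in simulated Gaussian noise; every waveform and parameter below is invented for illustration, and this is a bare zero-lag correlation rather than the full noise-weighted matched filter LIGO actually uses.

```python
import numpy as np

rng = np.random.default_rng(42)

t = np.linspace(0, 1, 1000)

# Toy "template bank": sinusoids at a few frequencies, standing in
# for the bank of predicted gravitational waveforms.
frequencies = [5, 10, 20, 40]
templates = [np.sin(2 * np.pi * f * t) for f in frequencies]

# Simulated detector output: a weak 20 Hz signal buried in noise
# with much larger amplitude than the signal itself.
true_signal = 0.3 * np.sin(2 * np.pi * 20 * t)
data = true_signal + rng.normal(0.0, 1.0, t.size)

def match(data, template):
    """Correlate the data against a unit-normalized template."""
    template = template / np.linalg.norm(template)
    return abs(np.dot(data, template))

# The template with the largest response is the best match.
scores = [match(data, tpl) for tpl in templates]
best = frequencies[int(np.argmax(scores))]
print(f"Best-matching template: {best} Hz")  # → 20 Hz
```

Even though the signal is invisible by eye in the noisy trace, the correlation against the correct template stands well above the others, which is why matched filtering works for signals buried in detector noise.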
It takes a lot of time to do this, and it could be sped up by quite a lot. One way of doing that is through deep learning. We recently published a paper on deep learning, a form of AI that uses multiple layers of neurons, in which we feed it the raw time series from the detector and receive an instant yes/no answer as to whether it’s a signal or noise. There’s a lot of potential in applying this field to the search for gravitational waves.
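The "raw time series in, yes/no out" idea can be illustrated with a deliberately tiny model: a single sigmoid neuron trained on simulated noisy traces, half of which contain a weak toy waveform. This is a stand-in for the deep network described in the paper, not the actual architecture; all data and parameters here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_points = 400, 128
t = np.linspace(0, 1, n_points)
waveform = np.sin(2 * np.pi * 8 * t)  # toy stand-in for a GW signal

# Half the examples are noise only; half contain the weak waveform.
labels = rng.integers(0, 2, n_samples)
X = rng.normal(0.0, 1.0, (n_samples, n_points)) + np.outer(labels, 0.5 * waveform)

# One sigmoid neuron reading the raw time series -- the simplest
# possible version of "feed it the time series, get a yes/no answer".
w = np.zeros(n_points)
b = 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(signal)
    grad = p - labels                        # cross-entropy gradient
    w -= lr * (X.T @ grad) / n_samples
    b -= lr * grad.mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
accuracy = (pred == labels).mean()
print(f"Training accuracy: {accuracy:.2f}")
```

Once trained, classifying a new trace is a single dot product and a threshold, which is why the trained network can answer far faster than re-running a template-bank search; a real deep network simply stacks many such layers.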
SH: I just have one more question, involving the growing collaboration between the physical sciences and computer science. It seems like such a natural fit. Do you find that we’re moving more towards a spirit of collaboration and cooperation?
HG: Yeah, I agree. It would be nice to have more collaboration between the two groups. Right now, with machine learning applications in gravitational wave astronomy, we are using techniques that have been around for a while. What I’m doing now is using those generative adversarial networks, an idea proposed in 2014 by Ian Goodfellow, a researcher at Google Brain, so it’s a bit of old news to people in the community, along with other techniques like deep learning. I think it would be good to have more collaboration between the two, because that way you can make these discoveries that much more quickly.