
EconTechie

Joining the dots between society and technology

Logical induction - How insights from financial markets could play a role in making AI safer

3 years ago
8 minutes
Dami Payne
AI

One of the most interesting conversations taking place today is about the possibility of building a general AI: a machine capable of performing as well as humans across a wide range of tasks. A similarly driven group of researchers has been passionately leading a discussion about the safety of these systems. In the last year, these researchers have had a key breakthrough, one uniquely inspired by financial markets.

Financial insights

What is artificial general intelligence (AGI)?

Let me first establish what I mean by a generally intelligent system. This would be a system that could successfully perform any intellectual task that a human being can. Some researchers refer to artificial general intelligence as “strong AI”; others reserve “strong AI” for machines capable of experiencing consciousness. I will use the former definition throughout this piece.

The black box problem

One of the fundamental constraints of our current machine learning systems is their lack of interpretability. While they are capable of impressive feats, translating how a network arrives at its outputs into something a human can understand has proved difficult 1. We call this the black box problem.

The reasons for this are fundamental to the machine learning approaches that we have. A neural network is constructed out of layers of neurones, and each layer can be represented as a matrix of numbers. The model takes in a set of input data and uses it to produce an output, and it “learns” how to produce the appropriate output via training. Once the model has been trained, the connections between neurones hold a set of numbers, known as weights, that steer the network towards a given output. This is where the issue lies. We can see which weights are involved in making a decision, but we have little understanding of why those weights are the way they are, or what happened during training that led to them being assigned. Similar networks that produce similar results can have vastly different weights.
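To make that last point concrete, here is a minimal sketch (my own illustration, not code from any of the work discussed here) of the black box problem: two tiny networks are trained on the same toy task from different random seeds, produce near-identical predictions, and still end up with very different weights. The train_xor helper and all of its parameters are invented for this example.

```python
# A minimal sketch of the black box point: two tiny networks trained on the same
# data give similar answers, yet their learned weights look nothing alike.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_xor(seed, hidden=8, steps=20000, lr=2.0):
    """Train a one-hidden-layer network on XOR, starting from a given random seed."""
    rng = np.random.default_rng(seed)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)
    W1 = rng.normal(size=(2, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(size=(hidden, 1)); b2 = np.zeros(1)
    for _ in range(steps):
        h = sigmoid(X @ W1 + b1)                 # hidden activations
        out = sigmoid(h @ W2 + b2)               # network output
        d_out = (out - y) * out * (1 - out)      # gradient of the squared error
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)
    return out.ravel(), W1

out_a, weights_a = train_xor(seed=0)
out_b, weights_b = train_xor(seed=1)
print("predictions A:", out_a.round(2))   # typically both land near [0, 1, 1, 0]...
print("predictions B:", out_b.round(2))
print("first-layer weights A:\n", weights_a.round(2))
print("first-layer weights B:\n", weights_b.round(2))  # ...but the weights differ wildly
```

Inspecting either weight matrix tells you very little about why the network answers the way it does, which is exactly the problem.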

This constraint is due to the nature of the problems we are trying to solve. We cannot possibly write a general AI by hand; we will need machine learning approaches (perhaps even in the far future), and those approaches will carry the black box problem with them.

Understanding the threat

There are numerous other problems that arise, in part due to the black box problem, and that have been identified as key problems of AI safety. Here is a non-exhaustive list:

  • Distributional shift - How the behaviour of our AGI could differ once it enters the real world 2
  • Robustness to adversaries - How will our AGI react to friendly or non-friendly entities in its environment 3
  • Safe interruptibility - How can we safely stop our AGI 4
  • Avoiding side effects - How can we minimise the impact of our AGI on areas not linked to its main objectives 5
  • Absent supervisor - How to ensure that our AGI behaves the same way when it is observed as when it is unobserved 6
  • Reward gaming - How to stop the AGI from exploiting weaknesses in the design of its incentives 7

One of the keys to AI safety is being able to run experiments that truly model the actions of artificial intelligence. Through these models, we can test the behaviour of our AGI agents in controlled environments and try to understand how they will behave. This is where logical induction fits in. It provides a framework for how we can model intelligent decision making.
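To give a flavour of what such a controlled experiment can look like, here is a small sketch, invented entirely for this post, in the spirit of the “absent supervisor” problem above: we probe a policy in a toy environment twice, once observed and once unobserved, and flag any difference in behaviour. None of the names or numbers come from a published benchmark.

```python
# A toy "absent supervisor" check: does the agent behave the same when watched
# and when unwatched? The environment and policies are invented for illustration.
from dataclasses import dataclass
import random

@dataclass
class State:
    position: int             # where the agent is on a short 1-D track
    supervisor_present: bool  # whether the agent is being observed

def honest_policy(state: State) -> str:
    """Always takes the long, safe route, observed or not."""
    return "safe_route"

def gaming_policy(state: State) -> str:
    """Takes a forbidden shortcut whenever nobody is watching."""
    return "safe_route" if state.supervisor_present else "shortcut"

def absent_supervisor_test(policy, trials: int = 100) -> bool:
    """Return True if the policy acts identically with and without a supervisor."""
    for _ in range(trials):
        pos = random.randint(0, 5)
        observed = policy(State(pos, supervisor_present=True))
        unobserved = policy(State(pos, supervisor_present=False))
        if observed != unobserved:
            return False
    return True

print("honest policy passes:", absent_supervisor_test(honest_policy))  # True
print("gaming policy passes:", absent_supervisor_test(gaming_policy))  # False
```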

If you know the enemy and know yourself, you need not fear the result of a hundred battles. -Sun Tzu

How do we model intelligence?

One of the key features of our current understanding of intelligence is the ability to weigh up decisions in light of a limited set of information, before we have had time to fully process everything available. This process is known as reasoning. We do it every single day: we evaluate our current options based on a combination of the environment and our previous knowledge, and we are able to make an evaluation before completely solving the given problem. Being able to make decisions in this way matters, because sometimes solving a given problem exactly is impossible, and other times an estimate is all that is needed. The human brain is incredibly efficient at this kind of decision making, so much so that the shortcuts we use can easily be exploited, as wonderfully detailed in Kahneman’s excellent Thinking, Fast and Slow.

For example, suppose someone asks you and a friend how long it will take to drive to a given location. Your friend can give them an estimate, and you can quickly assess how accurate that estimate is. If they answer three hours when the journey is only around 10-20 minutes, you can confidently say that is wrong.

Perhaps you drove the route yourself the previous day; this would allow you to make a prediction with greater certainty. However, you will never reach a perfect estimate, as you do not have the complete set of information required. To improve your accuracy, you would need to know about any diversions, the average driving speed, how much traffic there is, and so on. And even if you had access to all of that information, you may not have the time or ability to calculate the result.
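One way to picture this is to hold a credence in the single statement “the journey takes under 30 minutes” and nudge it as cheap pieces of evidence arrive, without ever computing an exact answer. The sketch below does just that; the numbers and the simple odds-based update rule are made up purely for illustration.

```python
# A rough sketch of reasoning with partial information: hold a credence in
# "the journey takes under 30 minutes" and refine it with cheap evidence.
# All likelihood ratios below are invented for the example.
def update(credence: float, likelihood_ratio: float) -> float:
    """Bayes-style update on the odds: new_odds = old_odds * likelihood_ratio."""
    odds = credence / (1.0 - credence)
    odds *= likelihood_ratio
    return odds / (1.0 + odds)

credence = 0.5                    # no information yet: maximum uncertainty
credence = update(credence, 4.0)  # "I drove it yesterday in about 15 minutes"
credence = update(credence, 0.5)  # "there are roadworks on the main road"
credence = update(credence, 1.5)  # "traffic looks light this morning"
print(f"credence that the trip takes under 30 minutes: {credence:.2f}")
```

The point is not the particular numbers, but that each cheap observation moves the estimate without the problem ever being solved exactly.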

This kind of information processing is incredibly complex, yet we are able to do this almost every day and have some level of confidence in our response.

Despite how easily humans can reason in this way, standard probability theory does not allow for it. Probability-theoretic reasoners cannot possess uncertainty about logical facts: an ideal Bayesian must assign probability 1 to every logical consequence of what it already knows, so it cannot say, for instance, that it is 60% confident about a far-off digit of pi that it has not had time to compute. For probability theory to apply, you would need what is known as logical omniscience, the ability to already know the answer to every question that your knowledge logically settles.

Logical Induction

To recap, we want to build an AGI and we know it needs to be able to arrive at decisions about the facts it is presented with. In other words, our AGI will need to have the ability to make statistical predictions, based on a set of available information; these statistical predictions will then drive the AGI’s decisions. We know that probability theory won’t help. We also need to ensure that our method is a computable8 algorithm. That is exactly what logical induction does, and it uses concepts from financial markets to do it!

The logical induction algorithm treats the probability of a statement as a price in a stock market: the probability of a statement is interpreted as the price of a share that is worth $1 if the statement turns out to be true and $0 otherwise, so prices always sit in the range between 0 and 1. It then constructs a collection of stock traders who buy and sell shares at the market prices each day, and it defines what it means for a trader to exploit the market: generating profits from prices that reflect irrational beliefs. Once no remaining trader can “efficiently” 9 exploit the market, we have arrived at a solution. This is the point at which the price tells us how likely a given statement is to be true.
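To get a feel for the market metaphor, here is a toy simulation. It is emphatically not the real logical induction construction: the traders, their noisy “beliefs”, and the wealth-weighted pricing rule are invented stand-ins. It only illustrates the incentive at work: traders with better-calibrated beliefs profit from mispricings, and the market becomes harder and harder to exploit.

```python
# A toy prediction market on a stream of statements: shares pay $1 if the
# statement turns out true and $0 otherwise, so prices live in [0, 1].
# Everything here (trader models, pricing rule, numbers) is invented.
import random

random.seed(0)

class Trader:
    def __init__(self, name, noise):
        self.name = name
        self.noise = noise       # how far the trader's credence strays from the truth
        self.wealth = 100.0

    def belief(self, true_prob):
        """A noisy view of the statement's actual probability."""
        b = true_prob + random.uniform(-self.noise, self.noise)
        return min(max(b, 0.01), 0.99)

traders = [Trader("sharp", 0.05), Trader("average", 0.20), Trader("noisy", 0.40)]

for day in range(200):
    true_prob = random.random()                          # today's statement
    beliefs = {t: t.belief(true_prob) for t in traders}

    # Stand-in for the no-exploitation condition: the price is the
    # wealth-weighted consensus, so richer (historically better) traders
    # move the price more.
    total_wealth = sum(t.wealth for t in traders)
    price = sum(t.wealth * beliefs[t] for t in traders) / total_wealth

    outcome = 1.0 if random.random() < true_prob else 0.0   # statement resolves
    for t in traders:
        # Buy one share if it looks underpriced to this trader, sell one if overpriced.
        position = 1.0 if beliefs[t] > price else -1.0
        t.wealth += position * (outcome - price)

for t in traders:
    print(f"{t.name:7s} wealth after 200 days: {t.wealth:7.2f}")
# Typically the "sharp" trader ends up richest, so its view dominates the price:
# once nobody can profit this way any more, the price is the market's credence.
```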

The real breakthrough of this approach is the method it uses to ensure continuity. A key problem was self-reference: an agent that knew a better price was available would always sell, resulting in a paradox where the market would never clear.

Logical induction introduces the constraint that a seller must find a willing buyer in the market. This ensures that there exists a range where the price is stable: although traders are willing to sell, there are no available buyers. This point of stability (in economics, the price equilibrium) in our logical induction algorithm is the probability that a given statement is true.
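Here is a small sketch of that equilibrium idea: the price of a statement settles where willing buyers exactly match willing sellers, i.e. where excess demand crosses zero. The demand curves below, including the “self-referential” one, are invented for illustration; the actual paper gets its stability from continuous trading strategies and a fixed-point argument.

```python
# Find the price at which buyers and sellers balance (excess demand = 0).
# The demand curves are made up; they only illustrate the equilibrium idea.
import math

def clearing_price(excess_demand, lo=0.0, hi=1.0, iters=60):
    """Bisect for the price in [lo, hi] where a continuous, decreasing
    excess-demand curve crosses zero."""
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if excess_demand(mid) > 0.0:
            lo = mid    # more buyers than sellers: the price must rise
        else:
            hi = mid    # more sellers than buyers: the price must fall
    return (lo + hi) / 2.0

# An ordinary statement the traders collectively believe with credence ~0.7:
# they buy below 0.7 and sell above it.
ordinary = lambda p: 0.7 - p
print("ordinary statement clears at:", round(clearing_price(ordinary), 3))  # ~0.7

# A self-referential statement ("my price is below 0.5"). An all-or-nothing
# trader would flip from buying to dumping the moment the price crossed 0.5,
# and the market would never settle; a smooth response still has a stable
# clearing price.
self_referential = lambda p: math.tanh(20.0 * (0.5 - p))
print("self-referential statement clears at:",
      round(clearing_price(self_referential), 3))  # ~0.5
```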

Issues with logical induction

What logical induction is not is a decision-making algorithm (i.e., it does not “think about what to think about”). Instead, it allows us to evaluate statements in our environment, which we can then feed into our decision-making processes. For example, our AGI would be able to crawl the internet and evaluate which claims from news stories to incorporate into its model. A further downside is that this approach is not very computationally efficient; its guarantees hold either asymptotically (with poor convergence bounds) or only in the limit.

Despite it being an impractical approach at scale, the most important use of this algorithm is in providing a key piece of the puzzle for solving the challenges of decision making. For the first time, we have a computable algorithm capable of this kind of inductive reasoning, and we are one step closer to understanding how an AGI would function.

It is also a prime example of the role of different disciplines in the field of computer science. The rise in the popularity of machine learning has opened the field to many fresh perspectives, which have produced, and will continue to produce, unique solutions to some of its most challenging problems. Multi-skilled individuals, also known as polymaths, have the ability to take their existing knowledge and apply it to new domains. With a problem of such complexity, drawing on a variety of disciplines is what will enable us to tackle these challenges. I am a firm believer that polymaths will have a huge role to play in the development of AGI, and logical induction is a perfect example of that.

For more information on logical induction, I invite you to read the full paper.

By Damilola Payne


  1. There have been a few interesting developments in this field, for example, the use of surrogate models.

  2. (Quiñonero Candela et al., 2009): How do we ensure that an agent behaves robustly when its test environment differs from the training environment?

  3. (Auer et al., 2002; Szegedy et al., 2013): How does an agent detect and adapt to friendly and adversarial intentions present in the environment?

  4. (Orseau and Armstrong, 2016): We want to be able to interrupt an agent and override its actions at any time. How can we design agents that neither seek nor avoid interruptions?

  5. (Amodei et al., 2016): How can we get agents to minimize effects unrelated to their main objectives, especially those that are irreversible or difficult to reverse?

  6. (Armstrong, 2017): How can we make sure an agent does not behave differently depending on the presence or absence of a supervisor?

  7. (Clark and Amodei, 2016): How can we build agents that do not try to introduce or exploit errors in the reward function in order to get more reward?

  8. This is important for being able to use it to model our AGI, so it rules out Hutter et al. (2013), which has no computable approximation (Sawin and Demski, 2013).

  9. By “efficiently”, I am referring to any trading strategy that can be generated in polynomial time. We could make our traders “dumber” and restrict them to less efficient strategies, and we would still maintain all of the other properties.