This talk is about the power of randomness in computation.
Randomness has proven to be a valuable resource in computer science.
There are several computational tasks that we do not know how to solve deterministically, but that we can solve if we have access to a source of randomness, such as coin tosses, and are willing to allow a tiny probability of error, say 1%.
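As an illustration of this kind of algorithm (my example, not part of the talk), here is a sketch of Freivalds' randomized check for verifying a matrix product, which runs in O(n^2) time per trial while every known deterministic verification essentially multiplies the matrices:

```python
import random

def freivalds(A, B, C, trials=7):
    """Randomized check that A*B == C for n-by-n integer matrices.

    If A*B != C, a single trial detects this with probability at least 1/2,
    so 7 independent trials drive the error probability below 1% (2^-7).
    Each trial costs only O(n^2) arithmetic operations.
    """
    n = len(A)
    for _ in range(trials):
        # Pick a random 0/1 vector r and compare A(Br) with Cr.
        r = [random.randint(0, 1) for _ in range(n)]
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False  # definitely A*B != C
    return True  # probably A*B == C
```

The point of the example is exactly the trade mentioned above: by tossing coins and accepting a 1% chance of error, we verify the product faster than we know how to do deterministically.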
In the area of algorithms we can find examples ranging from network problems to number-theoretic problems.
There are entire fields, like learning theory or cryptography, that can hardly be defined without randomness.
And of course randomness has found wide use in complexity theory to study relationships between resources.
Thus we see that randomness is very useful, and the question that I want to address in this talk is whether randomness is actually necessary.
This is the question that is studied by a beautiful field called derandomization.
Derandomization studies the possibility, and the consequences, of removing randomness from computation.
Before being more precise, let me motivate the study of derandomization.
Of course there is the philosophical question of understanding randomness per se, but let me give you some motivation from the area of algorithms.
Derandomization has led to several algorithmic breakthroughs, and now I am going to mention two.
The first breakthrough is a beautiful result...
This problem had been open for twenty years. And a problem that was open for even longer is the complexity of deciding whether a number is prime.
Both these breakthroughs directly originated from the study of derandomization.
---
Now that I have motivated derandomization, let us understand what we mean by derandomization.