# Program

This half-year program is organized around four topics:

- Complexity of numerical computations
- Homotopy methods
- Optimization and interior point methods
- Differential equations
The main emphasis is on the relationships among these topics.
Solving systems of equations is among the oldest and thorniest problems in mathematics. Over the centuries, mathematicians have alternated between 'negative' and 'positive' results. An example of a negative result is the famous theorem of Abel and Galois, who proved that there exist polynomials of degree 5 in one variable whose zeros cannot be expressed by means of the four basic arithmetic operations and root extractions. A positive result of tremendous applicability is Newton's method. It associates to a function f (e.g. one of the polynomials just mentioned) another function N_f with the following property: if z is a point close enough to a root ζ of f, then the sequence defined by z_0 = z, z_{i+1} = N_f(z_i) converges to ζ. Moreover, in general, this convergence is very fast: the distance between ζ and z_i is at most the distance between ζ and z_0 divided by 2^(2^i − 1). In more recent years, mathematicians have focused on questions such as what exactly "close enough" or "in general" means in the description above. Can one decide whether a point z is close enough? Will the sequence starting with a specific z_0 converge to a root of f?
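The iteration above can be sketched in a few lines of Python. This is a minimal illustration, not part of the program text; the particular quintic chosen here, f(z) = z^5 − z − 1, is a standard example of a degree-5 polynomial whose roots cannot be expressed by radicals.

```python
# Newton's method: N_f(z) = z - f(z)/f'(z), iterated from a starting point z0.

def newton(f, df, z0, steps=50):
    """Iterate z_{i+1} = N_f(z_i) = z_i - f(z_i)/f'(z_i) starting from z0."""
    z = z0
    for _ in range(steps):
        z = z - f(z) / df(z)
    return z

# Illustrative quintic: f(z) = z^5 - z - 1, with derivative f'(z) = 5z^4 - 1.
f = lambda z: z**5 - z - 1
df = lambda z: 5 * z**4 - 1

root = newton(f, df, 1.0)
print(root, abs(f(root)))  # starting from 1.0, the residual |f(root)| is essentially zero
```

Starting from z_0 = 1.0, the iterates converge rapidly to the real root near 1.167; the quadratic convergence promised by the 2^(2^i − 1) bound means the number of correct digits roughly doubles at each step once the iterate is close enough.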
Image courtesy of Scott Sutherland, SUNY Stony Brook
In the accompanying picture, bad initial points (those for which the sequence fails to converge to a root) for the polynomial f(z) = (z^2 − 1)(z^2 + 0.16) are colored black.
Note the complicated pattern arising from this set of bad points. The clear implication is that the question of whether an initial point is 'good' might well be an undecidable problem. Yet formally proving such an assertion requires the development of a formal computational model and a deep understanding of the geometry of decidable sets. This has been accomplished only very recently.
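One can probe the good/bad distinction numerically for this very polynomial. The sketch below (an illustration under the stated tolerances, not a decision procedure; the helper names are my own) iterates the Newton map for f(z) = (z^2 − 1)(z^2 + 0.16), whose roots are ±1 and ±0.4i, and reports whether the orbit lands near a root.

```python
# Expand f(z) = (z^2 - 1)(z^2 + 0.16) = z^4 - 0.84 z^2 - 0.16.

def nf(z):
    """Newton map N_f(z) = z - f(z)/f'(z) for this quartic."""
    fz = z**4 - 0.84 * z**2 - 0.16
    dfz = 4 * z**3 - 1.68 * z
    return z - fz / dfz

ROOTS = [1.0, -1.0, 0.4j, -0.4j]

def converges(z0, steps=100, tol=1e-8):
    """Heuristically classify z0 as 'good' (True) or 'bad' (False)."""
    z = complex(z0)
    for _ in range(steps):
        try:
            z = nf(z)
        except ZeroDivisionError:  # f'(z) = 0: the Newton map is undefined here
            return False
    return any(abs(z - r) < tol for r in ROOTS)

print(converges(2.0))  # well inside a basin of attraction: True
print(converges(0.0))  # the derivative vanishes at 0, so the map is undefined: False
```

Coloring each pixel of a grid of starting points by the outcome of `converges` reproduces the black fractal boundary in the picture; of course, a finite iteration with a fixed tolerance can only approximate the true set of bad points, which is exactly why decidability is the interesting question.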
The situation described above is in a sense very simple. In general, one needs to deal with functions in several variables and occasionally inequalities must be considered as well. Such problems are ubiquitous in optimization, where the goal is to find x maximizing (or minimizing) a function f subject to inequality constraints. Moreover, in a realistic computing environment it is important to consider the effects produced by round-off and other sources of error.
The goal of the FoCM program at MSRI is to develop a better understanding of how mathematics underpins numerical calculations. We hope to address the following four issues and the interplay among them:
- **Complexity:** What is the 'cost' inherent in a computational problem that no algorithm can circumvent?
- **Optimization:** How do we find the best value of a function (or a functional) subject to constraints?
- **Homotopy:** How does knowledge of the solution of a 'nearby' problem help us compute the solution of the problem at hand?
- **Geometric integration:** How do we compute approximate solutions that share qualitative properties with the true solution of the problem?

All these issues share two important characteristics: their methodology requires deep and demanding pure mathematics, and their understanding is vital to the development of a new generation of powerful and reliable algorithms in scientific computing.

**Keywords and Mathematics Subject Classification (MSC)**

**Primary Mathematics Subject Classification**
No Primary AMS MSC

**Secondary Mathematics Subject Classification**
No Secondary AMS MSC

August 17, 1998 - August 26, 1998: Introductory Workshop on Foundations of Computational Mathematics and Symbolic Computation in Geometry and Analysis

November 02, 1998 - November 06, 1998: Complexity of Continuous and Algebraic Mathematics