
David Albers

Department of Biomedical Informatics, Columbia University
622 West 168th Street VC-5
New York, NY 10032

Phone: +1 608 469 3047
Office: 519

Email: david.albers(at)dbmi.columbia.edu

My research has two particular areas of focus: temporal processing and modeling of biomedical data (dynamical biomedical informatics) and dynamical systems; very brief summaries of these interests are given below. This page is currently being updated and portions are rather out of date; nevertheless, both a list of publications and my CV can be found at the respective hyperlinks in this sentence and at the top of this page, and both are up to date.

I have been formally affiliated with (i.e., held a position at) the following institutions: the University of Wisconsin - Madison (physics and mathematics), the Santa Fe Institute, the Center for Computational Science and Engineering at the University of California - Davis, the Max Planck Institute for Mathematics in the Sciences, and Columbia University (biomedical informatics).

Areas of interest: temporal processing and modeling of electronic health record (EHR) data (dynamical biomedical informatics); dynamical systems (both abstract and computational); nonlinear time-series analysis; information theory; signal processing; learning theory; game theory; ergodic theory; global analysis; random matrix theory; mathematical finance; computational ecology; cellular automata; random dynamical systems; computational differential geometry; information geometry.

Dynamical biomedical informatics:

In general, there are very few mathematical models based concretely on analysis of biomedical data. There is a reason for this: biomedical data are very difficult to work with, and relatively little time-dependent analysis of such data exists. This poses a serious problem; consider where physics or atmospheric science would be without models to test hypotheses and further our conceptual understanding. It is worth noting, however, that biomedicine does not have the advantage of first-principles tools to ground the modeling. In any event, I am working on developing simple mathematical models, based on scientific and clinical insight, that help explain the trends observed in the data. This involves using many standard tools from dynamical systems, time-series analysis, signal processing, learning theory, and a large array of other data-driven fields (e.g., economics, atmospheric physics, etc.).
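To make the flavor of this concrete, here is a minimal, hypothetical sketch (not taken from any of my actual projects): fitting a simple mechanistic model to irregularly sampled, noisy measurements, as one must routinely do with clinical time series. The model form (exponential relaxation toward a baseline) and all numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def relaxation(t, x_inf, dx, tau):
    # simple mechanistic model: exponential relaxation toward baseline x_inf
    return x_inf + dx * np.exp(-t / tau)

# irregular sample times mimic the measurement patterns of clinical data
t = np.sort(rng.uniform(0.0, 10.0, size=40))
y = relaxation(t, 5.0, 7.0, 2.5) + rng.normal(scale=0.3, size=t.size)

# Fit: for each candidate time constant tau, the model is linear in
# (x_inf, dx), so solve a least-squares problem and keep the tau with the
# smallest residual.
best = None
for tau in np.linspace(0.5, 8.0, 200):
    A = np.column_stack([np.ones_like(t), np.exp(-t / tau)])
    coef, res, _, _ = np.linalg.lstsq(A, y, rcond=None)
    if best is None or res[0] < best[0]:
        best = (res[0], tau, coef)
_, tau_hat, (x_inf_hat, dx_hat) = best
print(f"estimated baseline={x_inf_hat:.2f}, amplitude={dx_hat:.2f}, tau={tau_hat:.2f}")
```

The point of the sketch is the workflow, not the model: a simple, interpretable dynamical hypothesis is confronted with irregular, noisy data, and the fitted parameters summarize the observed trend.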

Dynamical systems:

My work with dynamical systems consists largely of statistical studies of the qualitative behavior of high-dimensional dynamical systems. In particular, I am interested in understanding the geometric mechanisms that yield persistent, chaotic dynamics. Such an understanding requires insight both into the geometric differences between high- and low-entropy high-dimensional dynamical systems and into the transitions between the zero-, low-, and high-entropy regimes. Thus, I am focusing on several specific topics of research: the genericity of the Ruelle-Takens route to chaos; observation in numerical data and consequent implications concerning the stability conjecture of Palis and Smale; observation of characteristics of the more modern version of the stability conjecture, the Pugh-Shub conjecture; and the persistence of chaotic dynamics in relation to parameter variation, including addressing the existence of Milnor attractors and the windows conjecture of Barreto. The set of mappings I use for many of the above investigations consists of scalar, feedforward neural networks.
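As a rough illustration of the last point, here is a hedged sketch (all weights and parameter values are my own illustrative assumptions, not any published ensemble): a scalar, single-hidden-layer feedforward network iterated as a map x_{t+1} = f(x_t), with its largest Lyapunov exponent estimated as the orbit average of log|f'(x_t)|.

```python
import numpy as np

# Illustrative random network parameters (assumed Gaussian weights)
rng = np.random.default_rng(0)
n_hidden = 8
a = rng.normal(size=n_hidden)      # input weights
b = rng.normal(size=n_hidden)      # biases
beta = rng.normal(size=n_hidden)   # output weights
s = 4.0                            # scaling parameter

def f(x):
    # scalar feedforward network used as a one-dimensional map
    return beta @ np.tanh(s * (a * x + b))

def fprime(x):
    # derivative via the chain rule: d/dx tanh(u) = (1 - tanh(u)^2) * u'
    u = s * (a * x + b)
    return beta @ ((1.0 - np.tanh(u) ** 2) * s * a)

# Largest Lyapunov exponent as the orbit average of log|f'(x_t)|
x = 0.1
for _ in range(1000):          # discard transient
    x = f(x)
le, n = 0.0, 5000
for _ in range(n):
    le += np.log(abs(fprime(x)))
    x = f(x)
le /= n
print(f"estimated largest Lyapunov exponent: {le:.3f}")
```

Varying the scaling parameter s (or the number of hidden units) and re-estimating the exponent is one simple way to probe transitions between periodic and chaotic regimes in such network maps.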

Learning theory:

My original interest in learning theory stems from several applied problems in dynamical systems (e.g., attractor and dynamical reconstruction), as well as from working toward a geometric and dynamical understanding of learning algorithms. To this end I am studying, in a game-theoretic framework, the dynamics, information capacities, and relative success of different learning schemes. Examples of learning schemes under consideration are reinforcement learning, simulated annealing, neural networks, and epsilon-machine learning. There are many problems that I intend to address using the aforementioned framework; a partial list includes: the ecology of agents with different learning schemes in a multi-agent dynamical system, a geometric understanding of the dynamics of these multi-agent dynamical systems, and a more fundamental understanding of how information is held and processed within various learning schemes. This approach to learning theory is addressed within a multi-agent, complex-systems construction.
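A minimal sketch of the viewpoint, under my own illustrative assumptions (the game, the update rule, and the step size are not from any specific study of mine): two agents in a matrix game updating mixed strategies by multiplicative weights, which turns the learning scheme into a dynamical system on the strategy simplex.

```python
import numpy as np

# Matching pennies: payoff matrix for player 1; player 2 receives -A.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

eta = 0.1                       # learning rate (illustrative assumption)
p = np.array([0.9, 0.1])        # player 1's mixed strategy, off-equilibrium
q = np.array([0.2, 0.8])        # player 2's mixed strategy

for _ in range(2000):
    # expected payoff of each pure strategy against the opponent's mixture
    u1 = A @ q
    u2 = -A.T @ p
    # multiplicative-weights update, renormalized onto the simplex
    p = p * np.exp(eta * u1); p /= p.sum()
    q = q * np.exp(eta * u2); q /= q.sum()
print(f"player 1 strategy: {p}, player 2 strategy: {q}")
```

In zero-sum games like this one, such updates are known to cycle around the mixed equilibrium rather than converge to it; dynamics of exactly this kind, viewed geometrically, are what the multi-agent framework above is meant to study.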

Here is some general info for my sake and yours: Numerical Lyapunov Exponent Calculation, Computation and Debugging resources