119048 Moscow, Usacheva str., 6
phone/fax: +7 (495) 624-26-16
phone: +7 (495) 916-89-05
e-mail: math@hse.ru
Dean – Vladlen Timorin
Deputy Dean for Academic Progress – Igor Artamkin
Deputy Dean for Research – Evgeny Feigin
Deputy Dean – Vera Kuznetsova
The Winter School is held jointly by the NRU HSE and SPbSU as part of their cooperation in the area of mathematics. The aim of the school is to survey the principal trends in contemporary theoretical mathematics dealing with symmetry and complexity in a broad sense. We invite senior undergraduate students of mathematics and related fields, as well as graduates of BSc programs planning to continue their studies in fundamental mathematics.
When: February 1 – 6, 2019
Where: HSE Faculty of Mathematics, Moscow, 6 Usacheva ul.
How to take part in the school: to attend the school, fill out the form. The strongest applications will be selected, and their authors will be invited to take part in the school.
Deadline: if you do not need a Russian visa to attend the school, the deadline to fill out the form is January 12, 2019, 23:59. If you need a visa, the deadline is December 10, 2018, 23:59.
Financial support: dormitory accommodation is provided for the school participants. All other expenses, including transportation and meals, are covered by the participants themselves. Participants will be able to have lunch in the faculty cafeteria; the estimated price of one lunch is 120 rubles.
Schedule of the school
List of lectures and mini-courses.
Time-frequency analysis is one of the modern branches of harmonic analysis; it studies shifts and modulations in function and operator spaces. The first results in this area, due to Weyl, Wigner and von Neumann, appeared in the 1930s, when quantum mechanics was developing rapidly. In the past 20 years interest in this area of analysis has revived in connection with wide applications in information theory and signal analysis. Another reason for the revival is the intensive development of wavelet theory, which is “similar” to time-frequency analysis.
The aim of the course is to present time-frequency analysis from its basics up to modern results concerning Gabor frames and modulation spaces. Basic knowledge of analysis is desirable for understanding the course.
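The two operations the course starts from can be sketched concretely. The following toy example (my own illustration, not course material; the window `g` and signal sizes are arbitrary choices) implements cyclic translation and modulation on C^N and computes discrete Gabor coefficients; a signal that is itself a shifted, modulated copy of the window produces its largest coefficient at the corresponding time-frequency position.

```python
import cmath
import math

# A minimal discrete sketch of translation T_k and modulation M_l on C^N,
# and of the Gabor coefficients <f, M_l T_k g>.
N = 8

def translate(f, k):
    # (T_k f)[n] = f[n - k], cyclic shift
    return [f[(n - k) % N] for n in range(N)]

def modulate(f, l):
    # (M_l f)[n] = e^{2 pi i l n / N} f[n]
    return [cmath.exp(2j * math.pi * l * n / N) * f[n] for n in range(N)]

def gabor_coeff(f, g, k, l):
    # inner product <f, M_l T_k g>
    atom = modulate(translate(g, k), l)
    return sum(a * b.conjugate() for a, b in zip(f, atom))

g = [1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0]   # a short window
f = modulate(translate(g, 3), 2)                # f sits at time 3, frequency 2
coeffs = {(k, l): abs(gabor_coeff(f, g, k, l)) for k in range(N) for l in range(N)}
print(max(coeffs, key=coeffs.get))              # (3, 2): the coefficients "see" where f lives
```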
Famously, Arnold divided all mathematics into three parts ("celestial mechanics, hydrodynamics and cryptography") --- real, complex, and quaternionic mathematics. For instance, in the language of classical groups, this is expressed as the series of orthogonal, unitary and symplectic groups. However, as observed by Roman Mikhailov, “dear V.I. was somewhat mistaken in this matter. In fact, he was completely wrong. Mathematics is not divided into three parts, but into four parts.” Octonionic mathematics, not mentioned by Arnold, is at least as important --- and, in any case, possesses much more elegant symmetry.
During the course we will discuss the hierarchy of objects whose existence is related to octonions, starting with small finite simple groups and the corresponding geometric objects and going up to exceptional algebras, symmetric spaces, the Monster, etc. In the main part of the course we will describe both classical and recent constructions of exceptional algebras and groups of Lie type in terms of algebras, forms, combinatorial geometries and special projective varieties.
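To make the octonions tangible, here is a small sketch (my own illustration, not a construction from the course) of the Cayley-Dickson doubling that builds C, H and O from R: an element of length 2n is a pair (a, b), multiplied by (a, b)(c, d) = (ac - d*b, da + bc*), where * is conjugation. At the third doubling associativity is lost, which the example checks on basis octonions.

```python
# Cayley-Dickson construction on flat coefficient vectors of length 1, 2, 4, 8.

def conj(x):
    if len(x) == 1:
        return x[:]
    h = len(x) // 2
    return conj(x[:h]) + [-v for v in x[h:]]

def mul(x, y):
    if len(x) == 1:
        return [x[0] * y[0]]
    h = len(x) // 2
    a, b, c, d = x[:h], x[h:], y[:h], y[h:]
    # (a, b)(c, d) = (ac - d*b, da + bc*)
    left = [p - q for p, q in zip(mul(a, c), mul(conj(d), b))]
    right = [p + q for p, q in zip(mul(d, a), mul(b, conj(c)))]
    return left + right

def e(i):
    # i-th basis octonion as a flat 8-vector
    return [1 if j == i else 0 for j in range(8)]

# Octonions are not associative: (e1 e2) e4 and e1 (e2 e4) differ in sign.
print(mul(mul(e(1), e(2)), e(4)))   # [0, 0, 0, 0, 0, 0, 0, 1], i.e. e7
print(mul(e(1), mul(e(2), e(4))))   # [0, 0, 0, 0, 0, 0, 0, -1], i.e. -e7
```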
Buridan complexity of a computational problem arises from the need to choose one of several equivalent solutions, for example, when the problem has some symmetry. In terms of algorithms, this complexity means that any algorithm solving the problem must contain a number of conditional branching operators. The issue rarely arises for a single problem, but it may arise when solving a family of problems depending on a parameter: for example, when creating an algorithm that finds a root of a polynomial equation (the parameters in this case are the coefficients of the equation). The simplest examples are the impossibility of giving a good approximate solution of the complex equation X² = A by a continuous function of A, and of solving the real equation X³ + AX + B = 0 by a continuous function of the real parameters A and B. The scale of this problem, in particular the essential number of discontinuities of a general solution of a polynomial equation or system of equations, is estimated in terms of the geometry and topology of the corresponding discriminant set, that is, the set of polynomials with coinciding roots. (This set has many other important applications and will be discussed in detail.) In this area there are many questions that are easy to formulate but still open.
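The obstruction for X² = A can be seen numerically. In the sketch below (my own illustration; the step count is arbitrary), we force continuity by always picking the square root closest to the previous value while A traverses the unit circle once; the branch returns to A = 1 carrying the value -1, so no single-valued continuous solution exists.

```python
import cmath
import math

def track_sqrt_along_loop(steps=1000):
    # Follow a continuous branch of sqrt(A) as A goes once around the unit circle.
    value = 1.0 + 0.0j          # start with sqrt(1) = 1
    for k in range(1, steps + 1):
        a = cmath.exp(2j * math.pi * k / steps)
        r = cmath.sqrt(a)
        # enforce continuity: pick the root closest to the previous value
        if abs(r - value) > abs(-r - value):
            r = -r
        value = r
    return value

end = track_sqrt_along_loop()
print(end)  # approximately -1: the continuous branch does not come back to +1
```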
Does every theorem have a short proof? For example, to prove that a system of polynomial equations has a solution in {0,1}, it suffices to present such a solution. But what can we present to prove that a system has no solutions? In the example above, one could apply Hilbert's Nullstellensatz (however, the polynomials involved can be very complicated). What can and what cannot be done in the general case? This question is open even in the "easiest" case of propositional logic (where we have only logical variables and connectives); there the existence of a polynomially bounded proof system is equivalent to the equality of the complexity classes NP and co-NP, and the question has been intensively studied since the work of S. A. Cook and R. A. Reckhow (1979), who introduced the formal notion of a propositional proof system. A proof system is a polynomial-time algorithm that verifies proofs: it accepts correct proofs of true statements and does not accept any purported proof of a false statement. Although the question is open in the general case, exponential lower bounds are known for a number of specific proof systems. “Cook’s program” for studying the complexity of proofs consists in obtaining new exponential lower bounds for increasingly powerful proof systems. The concepts and methods used in this area belong to various branches of mathematics. For example, there are proof systems based on geometric principles or on proofs of the emptiness of a semi-algebraic set. In this introductory mini-course, we will formulate several proof systems and demonstrate several lower bounds. We will also discuss the connection with the general NP vs co-NP question and the existence of "proofs from The Book" --- a system with the shortest possible proofs.
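The "short certificate" side of this picture is easy to demonstrate. The sketch below (my own illustration; the encoding of a system as Python callables is a hypothetical choice, not anything from the course) is a polynomial-time verifier for the positive case: a {0,1} solution of a polynomial system is itself the proof, checked by plugging it in. No comparably simple certificate is known for the negative case, which is exactly the co-NP side of the question.

```python
# A system of polynomial equations over {0,1}, encoded as callables p with
# p(assignment) == 0 meaning the equation is satisfied.

def verifies(system, assignment):
    """Accept iff every polynomial vanishes on the given 0/1 assignment."""
    if any(v not in (0, 1) for v in assignment):
        return False
    return all(p(assignment) == 0 for p in system)

# x*y - 1 = 0 and x + y - 2 = 0 (forces x = y = 1)
system = [
    lambda v: v[0] * v[1] - 1,
    lambda v: v[0] + v[1] - 2,
]

print(verifies(system, [1, 1]))  # True: the assignment is a short proof
print(verifies(system, [0, 1]))  # False: not a proof of solvability
```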
Substructural logics are logical systems in which some or all of the structural rules that are classically valid are omitted. For example, contraction (A -> A and A) becomes invalid if we interpret A as a kind of resource which is spent when A is used: in this situation one could have A -> B and A -> C, but not A -> (B and C). Another rule, weakening (A -> (B -> A)), becomes invalid in so-called relevance logics (where we want the premises of an implication to be essentially used in obtaining the goal, and disallow reasoning like "If 2x2=5, then the Volga flows into the Caspian Sea"). Finally, there are non-commutative logics (where "A and B" is not the same as "B and A"). In this course, we will discuss various substructural systems, their algebraic semantics, and applications to the study of natural language.
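The resource reading of contraction can be made concrete with a toy multiset semantics (my own illustration, not a semantics from the course): a goal is derivable only if the available resources cover, with multiplicity, the resources the goal consumes. Under this reading one copy of A does not yield "A and A".

```python
from collections import Counter

def derivable(resources, goal):
    # Valid iff the multiset of resources covers the multiset the goal consumes.
    have, need = Counter(resources), Counter(goal)
    return all(have[r] >= k for r, k in need.items())

print(derivable(["A"], ["A"]))        # True
print(derivable(["A"], ["A", "A"]))   # False: contraction fails for resources
print(derivable(["A", "A"], ["A"]))   # True: here weakening-style waste is allowed
```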
Concentration of measure is a phenomenon in probability theory, analysis, and combinatorics: a function of a large number of variables, satisfying fairly general and not too restrictive assumptions, has to be almost constant. A classic example is the following: almost all of the surface of a high-dimensional sphere is concentrated near the equator. In the 1970s, Vitali Milman found an application of this fact in the local theory of Banach spaces, giving a new proof of the famous Dvoretzky theorem (originally a conjecture of Grothendieck): any centrally symmetric convex body of sufficiently large dimension has an almost spherical central section of a given dimension. Since then, the idea of concentration of measure has found many striking and effective applications, some of which we are going to discuss.
Minimal knowledge of analysis and probability theory is desirable for understanding the course.
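The equator example is easy to test numerically. In the sketch below (my own illustration; the dimension, sample size and band width are arbitrary choices), uniform random points on the sphere S^{n-1} are generated by normalizing Gaussian vectors; a single coordinate is then typically of size about 1/sqrt(n), so nearly all sampled points fall in a thin band around the equator {x_1 = 0}.

```python
import math
import random

def random_sphere_point(n, rng):
    # Normalizing an n-dimensional Gaussian vector gives a uniform point on S^{n-1}.
    v = [rng.gauss(0.0, 1.0) for _ in range(n)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

rng = random.Random(0)
n, trials = 1000, 2000
band = 3.0 / math.sqrt(n)   # band of width ~ 1/sqrt(n) around the equator
inside = sum(abs(random_sphere_point(n, rng)[0]) < band for _ in range(trials))
print(inside / trials)      # close to 1: almost all of the sphere lies in the band
```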
Questions can be addressed to Andrey Dymov by email at dymov@mi-ras.ru.