Self-adjoint Extensions of Schrödinger Operators with δ-magnetic Fields on Riemannian Manifolds

We consider the magnetic Schrödinger operator on a Riemannian manifold M. We assume the magnetic field is given by the sum of a regular field and Dirac δ measures supported on a discrete set Γ in M. We give a complete characterization of the self-adjoint extensions of the minimal operator in terms of boundary conditions. The result extends earlier results by Dabrowski-Šťovíček and Exner-Šťovíček-Vytřas.


Introduction
Let (M, g) be a two-dimensional, oriented, connected, complete C^∞ Riemannian manifold, where g is the Riemannian metric on M. Let dμ be the measure induced by the Riemannian metric. If we take a local chart (U, φ), φ = (x^1, x^2), the measure dμ is written as dμ = √G dx^1 dx^2 in U, where G = det(g_{mn}), g_{mn} = g(∂_m, ∂_n), and ∂_m = ∂/∂x^m. We denote L^2(M) = L^2(M; dμ). The set of all 1-forms on M is denoted by Λ^1(M). In a coordinate neighborhood U, A ∈ Λ^1(M) is written as A = A_1 dx^1 + A_2 dx^2. In general, the coefficients A_1, A_2 are complex-valued. We say A is real-valued if the coefficients are real-valued. We say A is of the class C^k Λ^1(M) if the coefficients are of the class C^k(U) for every local chart (U, φ). We define the class L^q_loc Λ^1(M) (1 ≤ q ≤ ∞), etc., similarly. The 2-form dA is called the magnetic field. If A ∈ L^1_loc Λ^1(M), then dA can be defined at least in the distribution sense; in U, the magnetic field is given by dA = (∂_1 A_2 − ∂_2 A_1) dx^1 ∧ dx^2. Let Γ = {γ_k}_{k=1}^K be a sequence of mutually distinct points in M. The number K may be infinite; in that case we additionally assume that Γ has no accumulation points in M. Let A be a 1-form on M given by the sum of two 1-forms:

(A) A = A^{(0)} + A^{(1)}.
The part A^{(0)} corresponds to the δ magnetic fields; that is, we assume the condition (A0), stated as equation (1).
More precisely, (1) means that the corresponding identity holds when tested against any φ ∈ C_0^∞(M) (since A^{(0)} ∈ L^1_loc Λ^1(M), the left-hand side is well-defined). Notice that this equation is independent of the Riemannian metric g. For the regular part A^{(1)} and the scalar potential V, we assume the conditions (A1) and (V); in particular, V is bounded in some open neighborhood of γ_k for every k = 1, ..., K.
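Written out, the distributional identity (1) presumably takes the following form; this is a sketch under the standard normalization in which 2πα_k is the flux of the δ field at γ_k (consistent with the fractional parts β_k introduced below):

```latex
\int_M A^{(0)} \wedge d\varphi \;=\; 2\pi \sum_{k=1}^{K} \alpha_k\, \varphi(\gamma_k),
\qquad \varphi \in C_0^{\infty}(M),
```

that is, dA^{(0)} = Σ_k 2πα_k δ_{γ_k} in the distribution sense. Since only the exterior derivative and the pairing of forms appear, no metric enters, which is why the equation is independent of g.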
Using the local coordinates (x^1, x^2), we define the Schrödinger operator L in each coordinate neighborhood, where (g^{mn}) is the inverse matrix of (g_{mn}). This definition is independent of the choice of local coordinates (see section 2). Define the minimal operator H_min as the closure of the restriction of L to C_0^∞(M \ Γ), where the closure is taken with respect to the graph norm. Define the maximal operator H_max by H_max = H*_min. Then we can show that H_max acts as L in the distribution sense, where L is regarded as a differential operator on D'(M \ Γ). We assume

(SB) The operator H_min is bounded from below.
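In the local coordinates (x^1, x^2), the differential expression for L is presumably the usual magnetic Laplace-Beltrami operator; a sketch, consistent with the invariant expression L = d*_A d_A + V recalled in section 2:

```latex
L u \;=\; \frac{1}{\sqrt{G}} \sum_{m,n=1}^{2} \bigl(D_m + A_m\bigr)
\Bigl[\sqrt{G}\, g^{mn} \bigl(D_n + A_n\bigr) u\Bigr] \;+\; V u,
\qquad D_m = -i\,\partial_m .
```

For the flat metric g_{mn} = δ_{mn} and A = 0 this reduces to −Δu + Vu, as it should.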
In the case where M is the flat Euclidean plane, it is well known that the operator H_min is not essentially self-adjoint, and the structure of the self-adjoint extensions of H_min can be determined via the celebrated Krein-von Neumann theory of self-adjoint extensions (see e.g. Reed-Simon [13]). In the textbook by Albeverio et al. [3], the case A^{(0)} = A^{(1)} = 0 and V = 0 (but Γ ≠ ∅) is exhaustively studied. Adami-Teta [1] and Dabrowski-Šťovíček [7] study the case K = 1, α_1 ∉ Z, A^{(1)} = 0, and V = 0. Exner-Šťovíček-Vytřas [8] study the case K = 1, α_1 ∉ Z, dA^{(1)} = B dx^1 ∧ dx^2 for some non-zero constant B (the constant magnetic field), and V = 0. Moreover, Lisovyy [11] studies the case where M is the Poincaré disk, g is the Poincaré metric, V = 0, and dA = B ω_g + 2πα δ_0, where B is a non-zero constant and ω_g is the surface form induced by the Poincaré metric g.
In all the results above, the authors first determine the deficiency subspaces Ker(H_max ∓ i) and then apply the Krein-von Neumann theory. This method cannot be applied in the case K ≥ 2 and α_k ∉ Z; however, this case (with A^{(1)} the constant field and V = 0) on the flat Euclidean plane is studied by the author [12], and the structure of the self-adjoint extensions is determined there. Our main purpose in this paper is to generalize the result of [12] to general complete Riemannian manifolds and to more general A and V.
Our first result is about the deficiency indices n ± (H min ) = dim Ker(H max ∓ i).
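In sketch form, with K_1 the number of indices k with 0 < β_k < 1 and K_2 the number with β_k = 0 (the numbers referred to in Theorem 1.1), the count is forced by the identity n_± = dim D/2 for semibounded operators and the dimension formula dim D = 4K_1 + 2K_2 proved in section 4:

```latex
n_{+}(H_{\mathrm{min}}) \;=\; n_{-}(H_{\mathrm{min}}) \;=\;
\begin{cases}
2K_1 + K_2, & K < \infty,\\
\infty, & K = \infty .
\end{cases}
```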
Next, we shall give a complete characterization of the self-adjoint extensions of H_min. To this purpose, we introduce convenient coordinates around the singularities and some auxiliary functions. For simplicity, we assume for a while that K = #Γ is finite. For each k, we take local coordinates (x^1, x^2) on a neighborhood U_k of γ_k, centered at γ_k, satisfying

g_{mn}(0) = δ_{mn}, ∂_j g_{mn}(0) = 0, (2)

where δ_{mn} is the Kronecker delta. Condition (2) is satisfied, for example, if we take the normal coordinates^2 as (x^1, x^2).
Let β_k be the fractional part of α_k; that is, β_k = α_k − ⌊α_k⌋ ∈ [0, 1). Let ψ_k be the function given by the line integral (3), where A^{(1)} = A^{(1)}_1 dx^1 + A^{(1)}_2 dx^2, x_0 is some point in U_k \ {0}, and the path of the line integral from x_0 to x lies in U_k \ {0}. Notice that the value of the line integral is independent of the choice of the path modulo 2πZ, by the Stokes theorem and our assumptions. Let Ã be the 1-form given by (4),^3 and let L̃ be the operator L corresponding to the vector potential Ã and the scalar potential V. Let K_1, K_2 be the numbers in Theorem 1.1. In the sequel, we rearrange the index k so that 0 < β_k < 1 for 1 ≤ k ≤ K_1. As we prove later, every u ∈ D(H_max) has an asymptotic expansion (5) near each γ_k, where c^k_1, ..., c^k_6 are constants and ξ is a regular function in the sense ξ ∈ D(H_min). Define Φ(u) to be the vector collecting these coefficients. Now our theorem is stated as follows.

^2 The coordinates defined by the local inverse of the exponential map from the tangent space at γ_k to M.
^3 More precisely, the 1-form A^{(1)} − A^{(1)}(0) is defined as (A^{(1)}_1 − A^{(1)}_1(0)) dx^1 + (A^{(1)}_2 − A^{(1)}_2(0)) dx^2.
Theorem 1.2 Assume (A), (A0), (A1), (V), (SB), and let X = ᵗ(X_1, X_2) be a matrix satisfying (7). (i) Then the operator H_X, defined by the boundary condition determined by X, is a self-adjoint extension of H_min. (ii) For any self-adjoint extension H of H_min, there exists some matrix X satisfying (7) such that H = H_X.
We can also consider the case K = ∞, but some technical assumptions are then necessary. We shall discuss this case in section 5.
Thus we can characterize the self-adjoint extensions in terms of boundary conditions. We can easily prove that the Friedrichs extension corresponds to the case X_1 = O, X_2 = Id. In the case M = R^2 and K = 1, similar results are obtained in [7] and [8], and our theorem is a generalization of their results. As stated in those papers, the choice of the matrix X is of course not unique: there are infinitely many matrices X giving the same Ran X.
The difficulty in the proof is that we cannot determine the deficiency subspaces explicitly. To overcome this difficulty, we describe the self-adjointness condition using only the quotient space D(H_max)/D(H_min). This quotient space is essentially the same object as the sum of the deficiency subspaces, but it is much more tractable than the deficiency subspaces themselves. This idea is also used in [4] and [12].
We note that recently the self-adjoint extensions of Schrödinger operators on R^2 with δ magnetic fields have been studied from the viewpoint of a hidden supersymmetric structure; see Correa et al. [5, 6].
The rest of the paper is organized as follows. In section 2, we review basic notation and facts from differential geometry and from the theory of self-adjoint extensions. In section 3, we prove that the structure of the self-adjoint extensions depends only on the singular part of the vector potential. In section 4, we prove the main theorems. In section 5, we consider the case K = ∞ and give a complete characterization of the self-adjoint extensions under some homogeneity conditions.

Formulas in differential geometry
We quote some formulas used in Shubin [14] for the convenience of the reader. Take a local chart (U, φ), and let (g^{mn}) be the inverse matrix of (g_{mn}). For α, β ∈ Λ^1_p(M) (the cotangent space at p), we define the scalar product ⟨α, β⟩ in the usual way; this definition is independent of the choice of local coordinates. The operator d* is characterized by formal adjointness to the exterior derivative d with respect to the L^2 inner products. Let A be a 1-form satisfying our assumptions. For a function f, we define a 1-form d_A f by d_A f = df + i f A, where d is the exterior derivative and i = √−1. For a 1-form ω, we define d*_A ω analogously. Then we obtain a representation of our Schrödinger operator L independent of local coordinates: L = d*_A d_A + V.
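For reference, the local expressions behind these definitions can be sketched as follows (conventions as in [14]; the bar denotes complex conjugation, and A is taken real-valued in the formula for d*_A):

```latex
\langle \alpha, \beta \rangle \;=\; \sum_{m,n=1}^{2} g^{mn}\, \alpha_m \overline{\beta_n},
\qquad
d^{*}\omega \;=\; -\frac{1}{\sqrt{G}} \sum_{m,n=1}^{2} \partial_m \bigl(\sqrt{G}\, g^{mn}\, \omega_n \bigr),
\qquad
d^{*}_{A}\omega \;=\; d^{*}\omega - i\,\langle A, \omega \rangle .
```

A direct computation with these expressions confirms that d*_A d_A + V agrees with the coordinate expression of L in section 1.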

For operator d *
A , the following Leibniz formulas hold: for an appropriate function f and 1-form ω, we have Proof.According to [14, (5.3)], 4 we have Then the conclusion follows from the above equality, and assumption V is bounded. 2

Theory of self-adjoint extensions
We quote some notation from the textbook [13]. Let H be a separable Hilbert space, with inner product (·, ·) and norm ‖·‖. All the linear operators in this subsection are operators on the Hilbert space H. For a linear operator X, D(X) denotes the domain of X, X̄ the closure of X, and X* the adjoint of X. The graph inner product of X is defined by (x, y)_X = (Xx, Xy) + (x, y) for x, y ∈ D(X), and the graph norm by ‖x‖_X = (x, x)_X^{1/2}. We introduce an equivalent of the sum of the deficiency subspaces, which is also introduced in [4] and [12]. Let X be a closed, densely defined symmetric operator. Let D = D(X*)/D(X), where the right-hand side denotes the quotient space. The space D is a Hilbert space equipped with the norm ‖[x]‖_D = ‖Qx‖_{X*}, where x ∈ D(X*), [x] = x + D(X) denotes the equivalence class of x in the quotient space D(X*)/D(X), and Q denotes the orthogonal projection onto the orthogonal complement of D(X) in (D(X*), (·,·)_{X*}). We also introduce a sesquilinear form [·,·]_D on D; the value [u, v]_D is independent of the choice of the representatives x, y. Let P be the canonical projection from D(X*) to D. For a closed subspace V of D, we define the closed linear operator X_V as the restriction of X* to P^{-1}(V). We also define the subspaces H_±. Then the following proposition immediately follows from the definition of self-adjointness.

Proposition 2.2 1. For a closed subspace V of D, the operator X_V is a self-adjoint extension of X if and only if condition (11) holds. 2. For any self-adjoint extension X̃ of X, there exists a closed subspace V of D such that X_V = X̃.
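The sesquilinear form [·,·]_D used below is the abstract boundary form; under the definitions above it is presumably given, for x, y ∈ D(X*), by

```latex
\bigl[\,[x],\,[y]\,\bigr]_{\mathcal{D}} \;=\; (X^{*}x,\, y) \;-\; (x,\, X^{*}y).
```

Since X is symmetric, the right-hand side vanishes whenever x or y belongs to D(X), so the value depends only on the equivalence classes; this is consistent with the independence of the representatives stated above.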
In terms of the above notation, the Krein-von Neumann theory can be rephrased as follows.
Proposition 2.3 Let N_± = Ker(X* ∓ i) be the deficiency subspaces of X, and n_± = dim N_± the deficiency indices of X. Then, the following holds.
(i) The projection P gives a Hilbert space isomorphism from the direct sum N_+ ⊕ N_− to D. In particular, dim D = n_+ + n_−. (ii) There exists a one-to-one correspondence between the closed subspaces V of D satisfying (11) and the unitary operators U from H_+ to H_−. This proposition says that the space D can play the same role as the sum of the deficiency subspaces in the theory of self-adjoint extensions. Particularly when N_± are difficult to determine explicitly (as in our case), the space D is more tractable, since the elements of this space are determined only modulo D(X). Actually, in the next section we shall see that the structure of D for our Schrödinger operator H_min, and the form [·,·]_D, are determined only by the singular part A^{(0)} of the vector potential.
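The correspondence in (ii) is the von Neumann parametrization transported by P; a sketch, assuming H_± = P N_± (as the notation suggests):

```latex
\mathcal{V} \;=\; \bigl\{\, P(u_{+} + U u_{+}) \;:\; u_{+} \in N_{+} \,\bigr\},
\qquad U : N_{+} \to N_{-} \ \text{unitary},
```

so that X_V is the restriction of X* to D(X) + {u_+ + U u_+ : u_+ ∈ N_+}, recovering the classical von Neumann formula for self-adjoint extensions.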

Division into local potentials
Let (U_k, (x^1, x^2)) be the local coordinates introduced in section 1, and let Ã be the 1-form given by (4). Take a positive number ε_k so small that the closed disc {r ≤ 2ε_k} is contained in U_k, and let dμ_k be the corresponding measure, where Ĝ = det(ĝ_{mn}) and (ĝ^{mn}) is the inverse matrix of (ĝ_{mn}). Define a linear operator L_{k,min} on L^2(R^2; dμ_k), where L_k is regarded as a differential operator on R^2 \ {0} and the function ψ_k is given by (3). Define a map T from D to the direct sum of the spaces D_k = D(L_{k,max})/D(L_{k,min}); we also define a map S in the opposite direction. In the sequel, we sometimes write [f, g]_D = [[f], [g]]_D etc. for simplicity of notation.

Lemma 3.1 1. Assume K < ∞. Then the maps S, T defined above are well-defined and mutually inverse. Moreover, (12) holds for any pair of classes. 2. Assume K = ∞. Then the map S is well-defined and injective.
Proof. (i) We divide the proof into three steps.
Step 1. The map is well-defined and continuous.
By (5) and the Leibniz rule (9), we obtain the corresponding decomposition. The first and third terms in the parentheses on the right-hand side are in L^2(R^2; dμ_k) and are continuous with respect to ‖·‖_{H_max}. Moreover, we can prove that the second term is also in L^2 and continuous with respect to ‖·‖_{H_max} by using (10). □

Step 2. Let f ∈ D(H_min). By definition, there exists a sequence in C_0^∞(M \ Γ) converging to f with respect to the graph norm, and the claim of this step follows. Steps 1 and 2 imply that the map T is well-defined. We can similarly prove that the map S is also well-defined.
Step 3. The operator ST is the identity map on D.

Proof. By definition, it suffices to prove that g = ψf ∈ D(H_min).^5 Let (r, θ) be the radial coordinates in U_k, and let ξ be a suitable cutoff function. Let L_0, H_{0,min} and H_{0,max} be the operators corresponding to the potentials ξ^4 A and ξ^4 V. These potentials have no singularities, so we have H_{0,min} = H_{0,max} by [14]. Since Lg = L_0 g ∈ L^2, we have g ∈ D(H_{0,max}) = D(H_{0,min}). Thus we can take a sequence {g_n} approximating g in the graph norm, and the claim follows. We can prove TS = I similarly. Then (12) follows from (5) and the equality above. For K = ∞, the maps T^{(n)} and S^{(n)} satisfy T^{(n)} S^{(n)} = Id; this implies that the map S is well-defined and injective. □

^5 When K = ∞, we define the map S on the elements having only finitely many nonzero components, so there is no difficulty in the definition of S.

Analysis of operators on R^2
We shall analyze the operators L_k (or L_{k,min}, L_{k,max}) defined in the previous subsection. For simplicity of notation, we omit ˆ and ˜ in the sequel. Then our assumptions are the following: V is bounded and real-valued; and 6. g_{mn}(0) = δ_{mn}, ∂_j g_{mn}(0) = 0, and g_{mn} = δ_{mn} for r ≥ 2ε_k. We shall show that g_{mn}, A^{(1)} and V have nothing to do with the structure of the self-adjoint extensions. To this purpose, define a differential operator M_k on R^2, and define a linear operator M_{k,min} on L^2(R^2; dx^1 dx^2), together with M_{k,max} = M*_{k,min} and the quotient space E_k. We also define M^{(0)}_{k,min}, M^{(0)}_{k,max}, and E^{(0)}_k by replacing A_n by A^{(0)}_n in the above definitions.
The operator M^{(0)}_k is already studied in [1] and [7]. Here we quote their results and calculate the form [·,·]_{E^{(0)}_k}.

Proposition 3.2 (i) Assume 0 < β_k < 1. Then the deficiency indices are n_±(M^{(0)}_{k,min}) = 2, and the form [u, v]_{E^{(0)}_k} is given by (13). (ii) Assume β_k = 0. Then the deficiency indices are n_±(M^{(0)}_{k,min}) = 1, and the analogous formula holds.

Proof. (i) The first statement follows from the result in [7] or [1]. For the calculation of [u, v]_{E^{(0)}_k}, we use some notation from vector analysis. We use the gradient vector ∇ = ᵗ(∂_1, ∂_2), and identify a 1-form A with the component vector ᵗ(A_1, A_2). The dot · denotes the Euclidean inner product. Then we obtain (13), where n = (cos θ, sin θ) and the line integral is taken counterclockwise; we used the Green formula and the fact that n · A^{(0)} = 0. Then we can easily prove the second statement by using (13).
(ii) The first part of the statement follows from the results in [3]. The second statement can be justified by using (13). □

Next, we prove that the regular part A^{(1)} does not affect the structure of E_k and the corresponding form.
Before the proof, we prepare a perturbative lemma, which is an immediate corollary of [10, Theorem IV.5.22].

Lemma 3.4 Let H be a separable Hilbert space and ‖·‖ its norm. Let X, Y be densely defined symmetric operators on H. Assume D(X) ⊂ D(Y) and that there exist positive constants C, δ with 0 < δ < 1 such that ‖Yu‖ ≤ δ‖Xu‖ + C‖u‖ for every u ∈ D(X). Then we have D(X̄ + Ȳ) = D(X̄) and n_±(X + Y) = n_±(X), where the overline denotes the operator closure.
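A one-line consequence of the assumption, worth recording, is the two-sided bound

```latex
(1-\delta)\,\|Xu\| - C\|u\| \;\le\; \|(X+Y)u\| \;\le\; (1+\delta)\,\|Xu\| + C\|u\|,
\qquad u \in D(X),
```

so the graph norms of X and X + Y are equivalent on D(X); in particular the closures have the same domain, and the stability of the deficiency indices then follows from [10, Theorem IV.5.22].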
Proof of Proposition 3.3. We prove only statement (i); statement (ii) can be proved similarly.
By Lemma 3.4, we have n_±(M_{k,min}) = n_±(M^{(0)}_{k,min}), so the functions {f^j_k} (j = 1, 2, 4, 5) do not belong to D(M_{k,min}). And we can prove M_k f^j_k ∈ L^2(R^2) by using (14) and the boundedness of A^{(1)} near the origin. The boundary integral involving (A^{(1)} − A^{(1)}(0)) u v̄ r dθ can be computed in a similar way as in (13); thus the value [f^m_k, f^n_k]_{E_k} is not affected by A^{(1)}, since A^{(1)}(x) − A^{(1)}(0) → 0 as r → 0. □

Next we shall consider the non-flat case. We shall show that the metric g also does not affect the structure of D_k and the corresponding form. Since V is bounded, we can assume V = 0. In the sequel, we write the operator in vector notation, where D is the column vector ᵗ(D_1, D_2), D_j = −i∂_j, A is identified with the component vector ᵗ(A_1, A_2), and g^{-1} is the inverse matrix of g = (g_{mn}).
We shall prepare an elliptic a priori estimate.

Lemma 3.6 Let m, n ∈ {1, 2}. Then there exist C_m > 0 and C_{mn} > 0 such that the corresponding estimates hold for every u ∈ C_0^∞(R^2 \ {0}) and every ε > 0.

The difficulty is the singularity of our vector potential A at the origin. We can overcome this difficulty by using a commutator technique.
In a similar way as in (13), we obtain the expression of [u, v]_D in terms of Φ(u) and Φ(v), where Φ(u)* is the conjugate transpose of the column vector Φ(u) and D is the matrix given by (6). Let X = ᵗ(X_1, X_2) be a matrix satisfying (7). Then we obtain condition (11), and therefore H_X is self-adjoint. Conversely, for a given self-adjoint extension H of H_min, we can construct a (4K_1 + 2K_2) × (2K_1 + K_2) matrix X by arranging the coefficients of an arbitrary basis of V = P D(H) with respect to the basis {[f^j_k]}.

Infinite singularities
Let us consider the case K = ∞ and extend Theorem 1.2. Even in this case, for u ∈ D(H_max) and for each k, we can define the asymptotic coefficients c^k_j at γ_k. However, the sequence Φ_j(u) is now an infinite sequence. We shall find appropriate assumptions which make these infinite sequences square summable.
In the sequel, U_k, β_k, g_{mn} are those introduced in section 1. However, we may replace ψ_k defined by (3) by a more appropriate function satisfying (4), if such a function exists. For simplicity, we assume V = 0.

(U) (i) There exists ε_0 > 0, independent of k, such that U_k = {r < ε_0} for every k. (ii) There exist β_−, β_+ such that 0 < β_− ≤ β_k ≤ β_+ < 1 or β_k = 0, for every k. (iii) There exists C_1 > 0, independent of k, such that g_{mn} satisfies (2) together with a uniform bound in U_k. (iv) A corresponding uniform bound holds for A^{(1)}.

Thus we assume some homogeneity for g, A^{(0)}, and A^{(1)}. Since the open sets {U_k}_{k=1}^∞ are required to be disjoint, assumption (i) says that the points of Γ are uniformly separated in some sense. Assumption (ii) may seem a little strange, but we need it if we want to make the boundary value Φ(u) square summable.^7 Assumption (iii) bounds the curvature of M, and (iv) bounds the intensity of the magnetic field. In [12], the author considers a similar assumption when M is the flat Euclidean plane and dA^{(1)} is a constant magnetic field.
In the sequel, we denote by H the l^2-type space of boundary data, equipped with the usual l^2 inner product.

Proposition 5.1 Assume (A), (A0), (A1), (SB), (U), V = 0, and K = ∞. Then the linear map (23) is well-defined and a homeomorphism. Moreover, (24) holds, where D is a bounded operator on H defined by (6).
Once this proposition is established, our theorem can be proved in the same way as Theorem 1.2, so we omit the proof.
Theorem 5.2 Assume the same conditions as in Proposition 5.1. Then the statements of Theorem 1.2 hold with the following changes: X_1, X_2 are bounded operators on H, and condition (7) is replaced by the condition Ran X = Ker X*D, where D is the bounded operator on H ⊕ H defined in Proposition 5.1.
We conclude this paper by proving Proposition 5.1.
Proof of Proposition 5.1. We divide the proof into two steps.
Step 1. The map (23) is continuous and bijective, and its inverse is also continuous.

Proof. By our assumption (U) and the calculation in section 3, we can prove that there exists C > 0, independent of k, such that the corresponding estimates hold. Summing up these estimates with respect to k, we conclude that the map is continuous. The well-definedness of the map (23) can then be proved as in section 3. Since D is identified with the closed subspace D(H_min)^⊥ of D(H_max), and the projection from D(L_{k,max}) to D_k is continuous, we conclude that the map (23) is continuous. Moreover, we can prove that the inverse map is also well-defined and continuous, so we have the conclusion. □

Step 2. There exists C > 1, independent of k, such that C^{-1}|c^k| ≤ ‖[u]‖_{D_k} ≤ C|c^k| for every [u] ∈ D_k, where c^k = (c^k_1, c^k_2, c^k_4, c^k_5) for 0 < β_k < 1, c^k = (c^k_3, c^k_6) for β_k = 0, and the c^k_j are the asymptotic coefficients of u defined in section 1.

Proof. We only consider the case 0 < β_k < 1. Consider the formula for c^k_j, which can be verified by substituting all the basis functions into u. By choosing the representative u ∈ D(L_{k,min})^⊥ (so that ‖u‖_{L_{k,max}} = ‖[u]‖_{D_k}) and using the Schwarz inequality, we obtain the desired bound. The fraction appearing there is bounded uniformly with respect to k, by assumption (ii) of (U). Moreover, we can prove that ‖f^j_k‖_{L_{k,max}} is also uniformly bounded, by (U) and the calculations in section 3 (first decompose L_k as in section 3, and estimate all the terms). Thus we obtain the desired two-sided bound. □

⊕_{k=1}^∞ D_k having only finitely many nonzero components [f_k]. So there is no difficulty in the definition of S.
Let K = ∞. For any positive integer n, we can define T^{(n)} from D to ⊕_{k=1}^n D_k, and S^{(n)} from ⊕_{k=1}^n D_k to D.

Proposition 3.3 All the statements of Proposition 3.2 hold even if we replace M^{(0)}_{k,min} by M_{k,min} and E^{(0)}_k by E_k.

Proposition 3.5 All the statements of Proposition 3.2 hold even if we replace M^{(0)}_{k,min} by L_{k,min} and E^{(0)}_k by D_k.

Thus we have the conclusion. □

Proof of main theorems

Proof of Theorem 1.1. Since H_min is semibounded, we have n_+(H_min) = n_−(H_min) = dim D/2. By Lemma 3.1 and Proposition 3.5, we have for K < ∞

dim D = Σ_{k=1}^K dim D_k = 4K_1 + 2K_2,

and for K = ∞

dim D ≥ Σ_{k=1}^∞ dim D_k = ∞.

Thus we have the conclusion. □

Proof of Theorem 1.2. By Lemma 3.1 and Proposition 3.5, we have for u, v ∈ D(H_max)

[u, v]_D = 4π Φ(u)* ( O −D ; D O ) Φ(v),

Thus we have |c^k_j| ≤ C ‖[u]‖_{D_k} for j = 1. The cases j = 2, 4, 5 can be treated similarly, and the sum on the right-hand side is uniformly bounded. Thus the conclusion holds. □

By Steps 1 and 2, we have proved that the map (23) is well-defined and a homeomorphism. Equation (24) is confirmed by substituting each f^j_k as u or v. □