
DOCUMENT INFORMATION

Basic information

Title: Introduction to Non-Linear Algebra
Authors: V. Dolotin, A. Morozov
Institution: ITEP
Type: thesis
Year: 2006
City: Moscow
Pages: 127
Size: 1.61 MB

Structure

  • 1.1 Formulation of the problem
  • 1.2 Comparison of linear and non-linear algebra
  • 1.3 Quantities, associated with tensors of different types
    • 1.3.1 A word of caution
    • 1.3.2 Tensors
    • 1.3.3 Tensor algebra
    • 1.3.4 Solutions to poly-linear and non-linear equations
  • 2.1 Linear algebra (particular case of s = 1)
    • 2.1.1 Homogeneous equations
    • 2.1.2 Non-homogeneous equations
  • 2.2 Non-linear equations
    • 2.2.1 Homogeneous non-linear equations
    • 2.2.2 Solution of systems of non-homogeneous equations: generalized Craemer rule
  • 3.1 Summary of resultant theory
    • 3.1.1 Tensors, possessing a resultant: generalization of square matrices
    • 3.1.2 Definition of the resultant: generalization of the condition det A = 0 for solvability of a system of homogeneous linear equations
    • 3.1.3 Degree of the resultant: generalization of $d_{n|1} = \deg_A(\det A) = n$ for matrices
    • 3.1.4 Multiplicativity w.r.t. composition: generalization of det AB = det A det B for determinants
    • 3.1.5 Resultant for diagonal maps: generalization of det
    • 3.1.6 Resultant for matrix-like maps: a more interesting generalization of det
    • 3.1.7 Additive decomposition: generalization of $\det A = \sum_\sigma (-)^{\sigma} \prod_i A_i^{\sigma(i)}$
    • 3.1.8 Evaluation of resultants
  • 3.2 Iterated resultants and solvability of systems of non-linear equations
    • 3.2.1 Definition of iterated resultant $\tilde R_{n|s}\{A\}$
    • 3.2.2 Linear equations
    • 3.2.3 On the origin of extra factors in $\tilde R$
    • 3.2.4 Quadratic equations
    • 3.2.5 An example of cubic equation
    • 3.2.6 Iterated resultant depends on simplicial structure
  • 3.3 Resultants and Koszul complexes [4]-[8]
    • 3.3.1 Koszul complex. I. Definitions
    • 3.3.2 Linear maps (the case of $s_1 = \dots = s_n = 1$)
    • 3.3.3 A pair of polynomials (the case of n = 2)
    • 3.3.4 A triple of polynomials (the case of n = 3)
    • 3.3.5 Koszul complex. II. Explicit expression for determinant of exact complex
    • 3.3.6 Koszul complex. III. Bicomplex structure
    • 3.3.7 Koszul complex. IV. Formulation through $\epsilon$-tensors
    • 3.3.8 Not only Koszul and not only complexes
  • 3.4 Resultants and diagram representation of tensor algebra
    • 3.4.1 Tensor algebras T(A) and T(T), generated by $A_i^I$ and T [17]
    • 3.4.2 Operators
    • 3.4.3 Rectangular tensors and linear maps
    • 3.4.4 Generalized Vieta formula for solutions of non-homogeneous equations
    • 3.4.5 Coinciding solutions of non-homogeneous equations: generalized discriminantal varieties
  • 4.1 Definitions
    • 4.1.1 Tensors and polylinear forms
    • 4.1.2 Discriminantal tensors
    • 4.1.3 Degree of discriminant
    • 4.1.4 Discriminant as a $\prod_{k=1}^r SL(n_k)$ invariant
    • 4.1.5 Diagram technique for the $\prod_{k=1}^r SL(n_k)$ invariants
    • 4.1.6 Symmetric, diagonal and other specific tensors
    • 4.1.7 Invariants from group averages
    • 4.1.8 Relation to resultants
  • 4.2 Discriminants and resultants: Degeneracy condition
    • 4.2.1 Direct solution to discriminantal constraints
    • 4.2.2 Degeneracy condition in terms of $\det \hat T$
    • 4.2.3 Constraint on P[z]
    • 4.2.4 Example
    • 4.2.5 Degeneracy of the product
    • 4.2.6 An example of consistency between (4.17) and (4.19)
  • 4.3 Discriminants and complexes
    • 4.3.1 Koszul complexes, associated with poly-linear and symmetric functions
    • 4.3.2 Reductions of Koszul complex for poly-linear tensor
    • 4.3.3 Reduced complex for generic bilinear n × n tensor: discriminant is determinant of the square matrix
    • 4.3.4 Complex for generic symmetric discriminant
  • 4.4 Other representations
    • 4.4.1 Iterated discriminant
    • 4.4.2 Discriminant through paths
    • 4.4.3 Discriminants from diagrams
  • 5.1 The case of rank r = 1 (vectors)
  • 5.2 The case of rank r = 2 (matrices)
  • 5.3 The 2 × 2 × 2 case (Cayley hyperdeterminant [4])
  • 5.4 Symmetric hypercubic tensors $2^{\times r}$ and polynomials of a single variable
    • 5.4.1 Generalities
    • 5.4.2 The n|r = 2|2 case
    • 5.4.3 The n|r = 2|3 case
    • 5.4.4 The n|r = 2|4 case
  • 5.5 Functional integral (1.7) and its analogues in the n = 2 case
    • 5.5.1 Direct evaluation of Z(T)
    • 5.5.2 Gaussian integrations: specifics of cases n = 2 and r = 2
    • 5.5.3 Alternative partition functions
    • 5.5.4 Pure tensor-algebra (combinatorial) partition functions
  • 5.6 Tensorial exponent
    • 5.6.1 Oriented contraction
    • 5.6.2 Generating operation ("exponent")
  • 5.7 Beyond n = 2
  • 6.1 From linear to non-linear case
  • 6.2 Eigenstate (fixed point) problem and characteristic equation
    • 6.2.1 Generalities
    • 6.2.2 Number of eigenvectors $c_{n|s}$ as compared to the dimension $M_{n|s}$ of the space of symmetric functions
    • 6.2.3 Decomposition (6.8) of characteristic equation: example of diagonal map
    • 6.2.4 Decomposition (6.8) of characteristic equation: non-diagonal example for n|s = 2|2
    • 6.2.5 Numerical examples of decomposition (6.8) for n > 2
  • 6.3 Eigenvalue representation of non-linear map
    • 6.3.1 Generalities
    • 6.3.2 Eigenvalue representation of Plücker coordinates
    • 6.3.3 Examples for diagonal maps
    • 6.3.4 The map $f(x) = x^2 + c$
    • 6.3.5 Map from its eigenvectors: the case of n|s = 2|2
    • 6.3.6 Appropriately normalized eigenvectors and elimination of Λ-parameters
  • 6.4 Eigenvector problem and unit operators
  • 7.1 Relation between $R_{n|s^2}(\lambda^{s+1}\,|\,A^{\circ 2})$ and $R_{n|s}(\lambda\,|\,A)$
  • 7.2 Unit maps and exponential of maps: non-linear counterpart of algebra ↔ group relation
  • 7.3 Examples of exponential maps
    • 7.3.1 Exponential maps for n|s = 2|2
    • 7.3.2 Examples of exponential maps for 2|s
    • 7.3.3 Examples of exponential maps for n|s = 3|2
  • 8.1 Solving equations
    • 8.1.1 Craemer rule
    • 8.1.2 Number of solutions
    • 8.1.3 Index of projective map
  • 8.2 Dynamical systems theory
    • 8.2.1 Bifurcations of maps, Julia and Mandelbrot sets
    • 8.2.2 The universal Mandelbrot set
    • 8.2.3 Relation between discrete and continuous dynamics: iterated maps, RG-like equations and effective actions
  • 8.3 Jacobian problem
  • 8.4 Taking integrals
    • 8.4.1 Basic example: matrix case, n|r = n|2
    • 8.4.2 Basic example: polynomial case, n|r = 2|r
    • 8.4.3 Integrals of polylinear forms
    • 8.4.4 Multiplicativity of integral discriminants
    • 8.4.5 Cayley 2 × 2 × 2 hyperdeterminant as an example of coincidence between integral and algebraic discriminants
  • 8.5 Differential equations and functional integrals

Content

Comparison of linear and non-linear algebra

Linear algebra [1] is the theory of matrices (tensors of rank 2), non-linear algebra [7]-[12] is the theory of generic tensors.

The four main chapters of linear algebra,

• solutions of systems of linear equations;

• theory of linear operators (linear maps, symmetries of linear equations), their eigenspaces and Jordan matrices;

• linear maps between different linear spaces (theory of rectangular matrices, Plücker relations etc.);

• theory of quadratic and bilinear functions, symmetric and antisymmetric,

possess straightforward generalizations to non-linear algebra, as shown in the comparative table below.

Non-linear algebra is divided into two main branches: the theories of solutions to non-linear and to poly-linear equations. The primary special function of linear algebra, the determinant, is generalized to resultants and discriminants; notably, discriminants can be expressed through resultants and vice versa. Immediate applications of these concepts are found in the theories of SL(N) invariants, homogeneous integrals, and algebraic τ-functions.

The classification of tensors by their covariant and contravariant indices parallels the theories of linear operators and quadratic functions: it distinguishes transformations by operators $U^{\otimes r_1} \otimes (U^{-1})^{\otimes (r - r_1)}$ with different $r_1$. As in linear algebra, the orbits of non-linear $U$-transformations on tensor spaces depend heavily on $r_1$, and one can study canonical forms, stability subgroups and their bifurcations. The study of eigenvectors and Jordan cells grows into a full theory of orbits of non-linear transformations and of the Universal Mandelbrot set. Even in the simplest single-variable case this subject is rich in content and has significant physical applications.

| Linear algebra | Non-linear algebra |
| --- | --- |
| SYSTEMS of linear equations and their DETERMINANTS: $\sum_{j=1}^n A_i^j z_j = 0$, $i = 1,\dots,n$ | SYSTEMS of non-linear equations and their RESULTANTS: $A_i(z) = \sum_{j_1,\dots,j_{s_i}=1}^n A_i^{j_1\dots j_{s_i}} z_{j_1}\cdots z_{j_{s_i}} = 0$, $i = 1,\dots,n$ |
| Solvability condition: $\det_{1\le i,j\le n} A_i^j = 0$ | Solvability condition: $R_{s_1,\dots,s_n}\{A_1,\dots,A_n\} = 0$, or $R_{n|s}\{A_1,\dots,A_n\} = 0$ if all $s_1 = \dots = s_n = s$; degree $d_{s_1,\dots,s_n} \equiv \deg_A R_{s_1,\dots,s_n} = \sum_{i=1}^n \prod_{j\neq i} s_j$ |
| Solution of the homogeneous equation: $Z_j = \sum_{k=1}^n \check A_j^k C_k$, where $\sum_{j=1}^n A_i^j \check A_j^k = \delta_i^k \det A$ | |
| Dimension of solution space of the homogeneous equation (the number of independent choices of $\{C_k\}$): $\dim_{n|1} = \mathrm{corank}\{A\}$, typically $\dim_{n|1} = 1$ | typically $\dim_{n|s} = 1$ |
| Non-homogeneous equations: $\sum_{j=1}^n A_i^j z_j = a_i$ | $\sum_{j_1,\dots,j_s=1}^n A_i^{j_1\dots j_s} z_{j_1}\cdots z_{j_s} = \sum_{s'<s} a_i^{(s')}(z)$ |
| Solution (Craemer rule): $Z_k$ is defined from a linear equation, $Z_k \det A = \check A_k^l a_l$, i.e. $Z_k$ is expressed through principal minors | Solution (generalized Craemer rule): $Z_k$ is defined from a single algebraic equation, $R_{n|s}\{A^{(k)}(Z_k)\} = 0$, where $A^{(k)}(Z_k)$ is obtained by the substitutions $z_k \to z_k Z_k$ and $a^{(s')} \to z_k^{s-s'} a^{(s')}$; the set of solutions $Z_k$ satisfies the Vieta formula and its further generalizations, s.3.4.4 |
| # of solutions of the non-homogeneous equation: $\#_{n|1} = 1$ | $\#_{s_1,\dots,s_n} = \prod_{i=1}^n s_i$ |
| OPERATORS made from matrices: linear maps $z \to Vz$ (symmetries of systems of linear equations) | OPERATORS made from tensors: poly-linear maps $z \to V(z)$ of degree $d$ (symmetries of systems of non-linear equations) |
| Multiplicativity of determinants w.r.t. compositions of linear maps, for linear transforms $A \to UA$ and $z \to Vz$: $\det(UAV) = \det U\, \det A\, \det V$ | Multiplicativity of resultants w.r.t. compositions of non-linear homogeneous maps, for transforms $A \to U(A)$ of degree $d'$ and $z \to V(z)$ of degree $d$: $R_{n|d'sd}\{U(A(V(z)))\} = R_{n|d'}\{U\}^{(sd)^{n-1}}\, R_{n|s}\{A\}^{d'^n d^{n-1}}\, R_{n|d}\{V\}^{(d's)^n}$ |
| Eigenvectors (invariant subspaces) of a linear transform $A$ | Orbits (invariant sets) of a non-linear homogeneous transform $A$ |
| Orbits of transformations with $U = V^{-1}$ in the space of linear operators $A$: generic orbit (diagonalizable $A$'s) and $A$'s with coincident eigenvalues, reducible to Jordan form | Orbits of transformations with $U = V^{-1}$ in the space of non-linear operators $A$: non-singular $A$'s and $A$'s with coinciding orbits, belonging to the Universal Mandelbrot set [2] |
| Invariance subgroup of $U = V^{-1}$ | Invariance subgroup of $U = V^{-1}$: a product of Abelian groups |
| Eigenvalue problem: $A_i^j e_{j\mu} = \lambda_\mu e_{i\mu}$ | $A_i^{j_1\dots j_s} e_{j_1\mu}\cdots e_{j_s\mu} = \Lambda_\mu e_{i\mu}$, i.e. $A_i(\vec e_\mu) = \lambda_\mu(\vec e_\mu)\, e_{i\mu}$ |
| Spectral decomposition: $A_i^j = \sum_{\mu=1}^n e_{i\mu}\lambda_\mu e_\mu^j$, with $e_{i\mu} e^i_\nu = \delta_{\mu\nu}$ | $A_i^\alpha = \sum_{\mu=1}^{M_{n|s}} e_{i\mu}\Lambda_\mu E_\alpha^\mu = \sum_{\mu=1}^{M_{n|s}} \check e_{i\mu}\check E_\alpha^\mu$, $\Lambda_\mu = \lambda(e_\mu)$; $\{E^\mu_\alpha\} = (\text{max. minor of } E)^{-1}$, $E_{\alpha\mu} = e_{j_1\mu}\cdots e_{j_s\mu}$, $\alpha = (j_1,\dots,j_s)$ |
| RECTANGULAR $m\times n$ matrices ($m < n$): discriminantal condition on the rank | "RECTANGULAR" tensors of size $n_1 \le \dots \le n_r$: discriminantal condition on $\mathrm{rank}(T)$ |

In the linear case, when the corank $k > 1$, vanishing of the maximal minors is not sufficient: minors of all smaller sizes, down to $n + 1 - k$, must also vanish.

The second, equally well-known example of the same phenomenon is the degeneration of non-linear maps, but only of two homogeneous (or one projective) variables: $C^2 \to C^2$, $(x, y) \to \big(P_{s_1}(x,y),\, P_{s_2}(x,y)\big)$, with two homogeneous polynomials $P_{s_1}$ and $P_{s_2}$ of degrees $s_1$ and $s_2$. Generically this map is an $s_1 s_2$-fold covering of $C^2$, i.e. has index $s_1 s_2$; viewed as a map $P^1 \to P^1$, its index is instead $\max(s_1, s_2)$. If the polynomials, treated as functions of the projective variable $\xi = x/y$, have a common root, this index decreases by one. The condition for such a coincidence is the vanishing of the resultant, $\mathrm{Res}_\xi(P_{s_1}, P_{s_2}) = 0$.

For linear maps, the vanishing of the determinant signals a decrease in the dimension of the image; for non-linear maps, the map may remain surjective, while the number of branches of the inverse map decreases. This number of branches, the index, serves as a non-linear counterpart of the dimension of the kernel in the linear case. The index is essential for the construction of non-linear complexes and cohomologies, and experience with ordinary linear complexes also proves useful in non-linear studies.

In this paper we treat the ordinary determinant and the ordinary resultant as specific instances of a generic quantity that measures the degeneration of arbitrary maps, which we call the resultant. The discriminant is its counterpart for poly-linear functions: it coincides with the ordinary determinant in the linear case and expresses the condition for coinciding roots in the polynomial case.

Throughout the paper we use both homogeneous and projective coordinates. Homogeneous coordinates $\vec z = \{z_i,\ i = 1,\dots,n\}$ span a vector space $V_n$; the dual space $V_n^*$ consists of all linear functions of the $n$ variables. Projectivization acts on $V_n$ minus the origin, factoring it over common rescalings of all $n$ coordinates, and produces $P^{n-1}$.

Projectivization is a well-defined operation for homogeneous polynomials and equations, where all terms have the same degree in $\vec z$. Any polynomial equation can be made homogeneous by introducing an auxiliary homogeneous variable and adjusting the equation accordingly: $ax + b$ becomes $ax + by$, and $ax^2 + bx + c$ becomes $ax^2 + bxy + cy^2$. This procedure identifies

{the space of arbitrary polynomials of degree $\le s$ of $n-1$ variables} $=$ {the space of homogeneous polynomials of degree $s$ of $n$ variables}.

A system of $n-1$ non-homogeneous equations in $n-1$ variables is thus transformed into a system of $n-1$ homogeneous equations in $n$ variables. The latter has a continuous one-parameter family of solutions, generated by the auxiliary variable; fixing a section, the family generically intersects it in a discrete set of points, which represent the solutions of the original system. Special cases arise when the one-parameter family is tangent to the section at the intersection point.

Projective coordinates are defined in charts, e.g. $\xi_k = z_k/z_n$ for $k = 1,\dots,n-1$. A system of linear equations $\sum_{j=1}^n A_i^j z_j = 0$ defines a map of projective spaces $P^{n-1} \to P^{n-1}$, $z_i \to \sum_{j=1}^n A_i^j z_j$; in the charts this map looks like a rational map, $\xi_i \to \big(\sum_{k=1}^{n-1} A_i^k \xi_k + A_i^n\big)\big/\big(\sum_{k=1}^{n-1} A_n^k \xi_k + A_n^n\big)$.

The zero on the right-hand side of the equations does not correspond to a point of $P^{n-1}$. For a non-degenerate matrix $A$ the system has no non-vanishing solutions, so no point of $P^{n-1}$ is mapped to zero, and $P^{n-1}$ is indeed mapped into itself. The map is also onto, since a non-degenerate $A$ is invertible, so every point of the target $P^{n-1}$ has a pre-image. If $A$ is degenerate, $\det A = 0$, the map is still defined, but its image has codimension one in $P^{n-1}$, and the apparent zero must be correctly interpreted as belonging to this reduced image. For instance, for $n = 2$, the map $(x, y) \to (ax + by,\, cx + dy)$, i.e. $\xi \to \frac{a\xi + b}{c\xi + d}$ with $\xi = x/y$,

becomes degenerate when $ad - bc = 0$: then the ratio is constant, $\xi \to a/c = b/d$, and the whole $P^1$ is mapped into the single point $a/c$ of the target $P^1$. This applies also to the point $x/y = \xi = -b/a = -d/c$, which represents the non-trivial solution of the homogeneous system and naively would be mapped to the ill-defined point $0/0$: a l'Hôpital-like rule makes homogeneous equations tractable in projective spaces. This principle extends beyond linear equations to generic non-linear and polynomial equations, and the homogeneous and projective formulations are used in parallel throughout the theory.
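A tiny sympy illustration of this collapse (the numbers $a, b, c, d$ below are an arbitrary example, not from the text), chosen so that $ad - bc = 0$:

```python
from sympy import symbols, simplify

xi = symbols('xi')

# Degenerate 2x2 map: ad - bc = 2*2 - 4*1 = 0
a, b, c, d = 2, 4, 1, 2

# In the chart xi = x/y the map reads xi -> (a*xi + b)/(c*xi + d)
image = simplify((a*xi + b) / (c*xi + d))
print(image)   # 2: the whole P^1 is mapped to the single point a/c = b/d = 2
# The point xi = -b/a = -d/c = -2 is where the raw expression gives 0/0;
# after cancellation it is mapped to the same point 2, the l'Hopital-like rule.
```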

| Tensor | Relevant quantities | Typical results |
| --- | --- | --- |
| Generic rank-$r$ "rectangular" tensor of the type $n_1\times\dots\times n_r$: $T_{i_1\dots i_r}$, $1\le i_k\le n_k$; equivalently a function of $r$ vectors ($\vec x_k$ with $n_k$ components), $T_{i_1\dots i_r} x_{1 i_1}\cdots x_{r i_r}$; the coefficients $T_{i_1\dots i_r}$ are placed at the points of the $n_1\times\dots\times n_r$ hyperparallelepiped | Discriminant (Cayley hyperdeterminant) $D_{n_1\times\dots\times n_r}(T)$; $D(T) = 0$ is the consistency condition (existence of a solution with all $\vec x_k \neq \vec 0$) for the system $\partial T/\partial \vec x_k = 0$ (i.e. $\partial T/\partial x_{k i_k} = 0$) | $\deg_T(D)$ — see s.4.1.3; itedisc (iteration in $r$): $D_{n_1\times\dots\times n_r\times n_{r+1}}(T_{i_1\dots i_r i_{r+1}}) = \mathrm{irf}\, D_{n_1\times\dots\times n_r}(T_{i_1\dots i_r i_{r+1}} t_{i_{r+1}})$; additive decomposition [9] |
| Totally symmetric hypercubic rank-$r$ tensor, i.e. all $n_k = n$ and $S_{i_1\dots i_r} = S_{i_{P(1)}\dots i_{P(r)}}$ for any permutation $P$; equivalently a function ($r$-form) of a single vector $\vec x$: $S_{i_1\dots i_r} x_{i_1}\cdots x_{i_r}$ | Symmetric discriminant $D_{n|r}(S) = \mathrm{irf}(\dots)$: an irreducible factor in the full discriminant, emerging for hypercube shape and total symmetry; $D(S) = 0$ is the consistency condition for $\partial S/\partial \vec x = 0$ | $\deg_S D_{n|r}(S) = n(r-1)^{n-1}$ |
| Totally antisymmetric tensor (all $n_k = n$): $C_{i_1\dots i_r} = (-)^P C_{i_{P(1)}\dots i_{P(r)}}$ for any permutation $P$ | HyperPfaffian: the discriminant is a power of $\mathrm{PF}(C)$ for some power $\nu$ | |
| Homogeneous map $V_n \to V_n$ of degree $s$: $A_i(\vec z) = A_i^{j_1\dots j_s} z_{j_1}\cdots z_{j_s}$, a tensor of rank $r = s+1$ with total symmetry in the last $s$ indices | Resultant $R_{n|s}\{\vec A\} = \mathrm{irf}(\dots)$; $R = 0$ is the consistency condition for the homogeneous system, i.e. for the existence of non-vanishing solutions $\vec z \neq 0$ of $\vec A(\vec z) = 0$ | $\deg_A R_{n|s} = n s^{n-1}$; iteration in $n$ involves $R_{n+1|s}\{A_1(\vec z),\dots,A_{n+1}(\vec z)\}$ with $A_i^\alpha = A_i^{j_1\dots j_s}$, $\alpha = (j_1,\dots,j_s)$ |

Linear algebra (particular case of s = 1)

Homogeneous equations

In general position the system of $n$ homogeneous equations for $n$ variables,

$$A_i^j z_j = 0, \qquad (2.2)$$

has a single solution: all $z_j = 0$. A non-vanishing solution exists only if the $n^2$ coefficients $A_i^j$ satisfy one constraint:

$$\det_{1\le i,j\le n} A_i^j = 0, \qquad (2.3)$$

i.e. a certain homogeneous polynomial of degree $n$ in the coefficients of the matrix $A_i^j$ vanishes.

If $\det A = 0$, the homogeneous system (2.2) has solutions of the form (in fact this is a single solution, see below)

$$Z_j = \check A_j^k C_k, \qquad (2.4)$$

where $\check A_j^k$ is a minor: the determinant (taken with the appropriate sign) of the $(n-1)\times(n-1)$ matrix obtained by deleting the $j$-th row and $k$-th column from the $n\times n$ matrix $A$. It satisfies

$$A_i^j \check A_j^k = \delta_i^k \det A, \qquad \check A_j^k A_k^i = \delta_j^i \det A \qquad (2.5)$$

and

$$\delta \det A = \sum_{i,j=1}^n \check A_j^i\, \delta A_i^j. \qquad (2.6)$$

Equation (2.4) provides a solution of (2.2) for any choice of the parameters $C_k$: this is an immediate consequence of (2.5), given that $\det A = 0$. However, again because of (2.5), the shift $C_k \to C_k + A_k^l B_l$ with any $B_l$ does not change the solution (2.4). Consequently the solutions (2.4) form a one-dimensional space, and different choices of $C_k$ yield projectively equivalent $Z_j$.

If the rank of $A$ is smaller than $n-1$ ($\mathrm{corank}(A) > 1$), then (2.4) vanishes identically, and non-vanishing solutions are given instead by minors of smaller sizes.

Here $\check A^{\{k\}}_{\{j\}}$ denotes the minor of the $(n-q)\times(n-q)$ matrix obtained by deleting the rows labeled by $\{j\}$ and the columns labeled by $\{k\}$ from the matrix $A$. For most choices of the parameters $C$ the solutions are equivalent, and when $\mathrm{corank}(A) = q$ the solution space is $q$-dimensional.
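As a concrete numeric check of (2.4) (the matrix and the parameters $C_k$ below are arbitrary illustrative choices, not from the text), the signed minors $\check A_j^k$ can be assembled into the adjugate matrix, for which $A\check A = \det A \cdot 1$; for a degenerate $A$ any contraction with $C_k$ then lands in the kernel:

```python
import numpy as np

def adjugate(A):
    """Signed minors arranged so that A @ adjugate(A) = det(A) * I."""
    n = A.shape[0]
    adj = np.empty_like(A)
    for j in range(n):
        for k in range(n):
            # delete row k and column j, attach the sign (-1)^(j+k)
            M = np.delete(np.delete(A, k, axis=0), j, axis=1)
            adj[j, k] = (-1) ** (j + k) * np.linalg.det(M)
    return adj

# A degenerate 3x3 matrix: third row = first + second, so det A = 0, corank 1
A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [5., 7., 9.]])

# Z_j = sum_k  Adj_j^k C_k  solves A z = 0 for any choice of C_k  (eq. 2.4)
C = np.array([1.0, -2.0, 0.5])
Z = adjugate(A) @ C
print(A @ Z)   # numerically the zero vector
```

Different choices of `C` rescale `Z` but leave it on the same projective point, in line with the remark above.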

Non-homogeneous equations

A solution to the non-homogeneous system (2.1) exists and is unique when $\det A \neq 0$. It is then given by the Craemer rule, which we present in four different formulations.

Craemer I:
$$Z_j = \frac{\check A_j^k a_k}{\det A} = (A^{-1})_j^k\, a_k. \qquad (2.8)$$

With the help of (2.6), this formula can be converted into

Craemer II:
$$Z_j = \frac{\partial \log \det A}{\partial A_k^j}\, a_k.$$

Craemer III: Given the $k$-th component $Z_k$ of the solution of the non-homogeneous system (2.1), one can observe that the following homogeneous equation holds:

$$\sum_{j\neq k} A_i^j z_j + (A_i^k Z_k - a_i)\, z_k = 0 \qquad (2.10)$$

(no sum over $k$ in this case!): it has the solution $z_j = Z_j$ for $j \neq k$ and $z_k = 1$. This means that the determinant of the associated $n\times n$ matrix

$$[A^{(k)}]_i^j(Z_k) \equiv (1-\delta^j_k)\, A_i^j + \delta^j_k\, (A_i^k Z_k - a_i) \qquad (2.11)$$

vanishes. This implies that $Z_k$ is a solution of the equation

$$\det_{1\le i,j\le n} [A^{(k)}]_i^j(z) = 0. \qquad (2.12)$$

The left-hand side is a linear function of $z$:

$$\det_{1\le i,j\le n} [A^{(k)}]_i^j(z) = z\, \det A - \det A^{(k)}_{\vec a}, \qquad (2.13)$$

where the $n\times n$ matrix $A^{(k)}_{\vec a}$ is obtained by substituting the $k$-th column of $A$ by the vector $\vec a$: $A_i^k \to a_i$. This brings us to the standard form of the Craemer rule,

Craemer IV:
$$Z_k = \frac{\det A^{(k)}_{\vec a}}{\det A}. \qquad (2.14)$$

If instead $\det A = 0$, a solution of the non-homogeneous system (2.1) exists only for special right-hand sides: the vector $\vec a$ must be properly constrained, namely it has to lie in the image of the linear map $A(z)$.
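A minimal numeric sketch of Craemer IV (the matrix and right-hand side below are arbitrary illustrative choices):

```python
import numpy as np

# Cramer rule in the form (2.14): Z_k = det(A^(k)_a) / det(A),
# where A^(k)_a is A with its k-th column replaced by the vector a.
A = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 4.]])
a = np.array([1., 2., 3.])

detA = np.linalg.det(A)        # nonzero: the solution exists and is unique
Z = np.empty(3)
for k in range(3):
    Ak = A.copy()
    Ak[:, k] = a               # substitute the k-th column by a
    Z[k] = np.linalg.det(Ak) / detA

print(Z)   # agrees with np.linalg.solve(A, a)
```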

Non-linear equations

Homogeneous non-linear equations

As in linear algebra, it is essential to distinguish between homogeneous and non-homogeneous equations. For homogeneous (projective) equations, non-vanishing solutions exist if and only if the coefficients of all the equations satisfy a single constraint,

R{system of homogeneous eqs} = 0,

and solutions of non-homogeneous equations can be expressed algebraically through such R-functions, in an analogue of the Craemer rule, see s.2.2.2. The R-function, the resultant of the system, is characterized by two kinds of parameters: the number of variables $n$ and the powers $s_1,\dots,s_n$ of the equations. Specifically, the homogeneous system of $n$ polynomial equations of degrees $s_1,\dots,s_n$ in $n$ variables $\vec z = (z_1,\dots,z_n)$,

$$A_i^{j_1\dots j_{s_i}} z_{j_1}\cdots z_{j_{s_i}} = 0, \qquad i = 1,\dots,n, \qquad (2.16)$$

has a non-vanishing solution (i.e. at least one $z_j \neq 0$) iff

$$R_{s_1,\dots,s_n}\{A_1,\dots,A_n\} = 0.$$

The resultant is a polynomial in the coefficients $A$, of degree

$$d_{s_1,\dots,s_n} = \deg_A R_{s_1,\dots,s_n} = \sum_{i=1}^n \prod_{j\neq i} s_j.$$

When all the degrees are equal, $s_1 = s_2 = \dots = s_n = s$, the resultant $R_{n|s}$ depends on just two parameters, $n$ and $s$. The generic $R_{s_1,\dots,s_n}$ is easily reduced to $R_{n|s}$: multiplying equations by appropriate powers of $z_n$ equalizes all the powers, at the price of introducing new solutions, which can then be systematically excluded. In this way $R_{s_1,\dots,s_n}$ appears as an easily identifiable irreducible factor of $R_{n|\max(s_1,\dots,s_n)}$.

The expressions $A_i(\vec z)$ in (2.16) define a map $P^{n-1} \to P^{n-1}$ of projective spaces, and $R_{n|s}$ can be viewed as a functional on maps of degree $s$ of $P^{n-1}$. This interpretation makes clear the distinction between the indices in

$$A_i(\vec z) = A_i^{j_1\dots j_s} z_{j_1}\cdots z_{j_s}:$$

the $j$'s are contravariant, while $i$ is covariant.

When the resultant vanishes (while the resultants of the subsystems, the analogues of the minors, do not), the homogeneous equations acquire one-parametric families of solutions, the parameter being the overall scale of the $z$'s; in $P^{n-1}$ these become discrete points, and the total number of such points counts the branches of the original solution.

Of course, in the particular case of linear maps (when all $s_i = 1$) the resultant coincides with the ordinary determinant: $R_{n|1}\{A\} = \det A$.

For $n = 0$ there are no variables, and we assume $R_{0|s} \equiv 1$.

For $n = 1$ the homogeneous equation of one variable is $A z^s = 0$, and $R_{1|s} = A$.

In the simplest non-trivial case of $n = 2$ the two homogeneous variables can be named $x = z_1$ and $y = z_2$, and the system of two equations is

$$A(x, y) = \sum_{k=0}^{s} a_k x^k y^{s-k} = a_s \prod_{j=1}^{s} (x - \lambda_j y) = y^s \tilde A(t),$$

$$B(x, y) = \sum_{k=0}^{s} b_k x^k y^{s-k} = b_s \prod_{j=1}^{s} (x - \mu_j y) = y^s \tilde B(t),$$

where $t = x/y$. Its resultant is just the ordinary resultant [21] of the two polynomials $\tilde A(t)$ and $\tilde B(t)$ of a single variable $t$, the determinant of the $2s\times 2s$ Sylvester matrix:

$$R_{2|s}\{A, B\} = \left\|\begin{matrix}
a_s & a_{s-1} & a_{s-2} & \dots & a_0 & & & \\
 & a_s & a_{s-1} & a_{s-2} & \dots & a_0 & & \\
 & & \ddots & & & & \ddots & \\
 & & & a_s & a_{s-1} & a_{s-2} & \dots & a_0 \\
b_s & b_{s-1} & b_{s-2} & \dots & b_0 & & & \\
 & b_s & b_{s-1} & b_{s-2} & \dots & b_0 & & \\
 & & \ddots & & & & \ddots & \\
 & & & b_s & b_{s-1} & b_{s-2} & \dots & b_0
\end{matrix}\right\| \qquad (2.21)$$

When the powers $s_1$ and $s_2$ of the two polynomials differ, the resultant is the determinant of an $(s_1+s_2)\times(s_1+s_2)$ matrix, structured so that the first $s_2$ rows contain the (shifted) coefficients of the degree-$s_1$ polynomial, while the last $s_1$ rows contain those of the degree-$s_2$ polynomial. This definition is what underlies the term "resultant" in the general context. In the particular case of a linear map ($s = 1$), (2.21) reduces to the determinant of a $2\times 2$ matrix:

$$R_{2|1}\{A\} = \mathrm{Res}_t(a_1 t + a_0,\ b_1 t + b_0) = \left\|\begin{matrix} a_1 & a_0 \\ b_1 & b_0 \end{matrix}\right\| = a_1 b_0 - a_0 b_1.$$
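For illustration (the sample coefficients below are arbitrary choices, not from the text), sympy can both build the Sylvester determinant (2.21) explicitly and compute the resultant directly; two polynomials sharing the root $t = 2$ give zero either way:

```python
from sympy import symbols, Matrix, resultant

t = symbols('t')

# Dehomogenized degree-2 polynomials: A~(t) = (t-1)(t-2), B~(t) = (t-2)(t-3)
a2, a1, a0 = 1, -3, 2
b2, b1, b0 = 1, -5, 6

# 4x4 Sylvester matrix of eq. (2.21): shifted rows of a- and b-coefficients
S = Matrix([[a2, a1, a0, 0],
            [0,  a2, a1, a0],
            [b2, b1, b0, 0],
            [0,  b2, b1, b0]])

print(S.det())                                        # 0: common root t = 2
print(resultant(t**2 - 3*t + 2, t**2 - 5*t + 6, t))   # 0 as well
```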

Solution of systems of non-homogeneous equations: generalized Craemer rule

The concept of the resultant, originally defined for homogeneous equations, can also be used to solve non-homogeneous equations. This approach reduces the problem to finding the roots of ordinary algebraic equations of a single variable, and constitutes a non-linear generalization of the traditional Craemer rule. We begin with a specific example before outlining the general procedure.

Consider the system of two non-homogeneous equations for two variables:

$$q_{111} x^2 + q_{112} xy + q_{122} y^2 = \xi_1 x + \eta_1 y + \zeta_1,$$
$$q_{211} x^2 + q_{212} xy + q_{222} y^2 = \xi_2 x + \eta_2 y + \zeta_2. \qquad (2.22)$$

The homogeneous system (with all $\xi_i, \eta_i, \zeta_i = 0$) is solvable whenever

$$\left\|\begin{matrix}
q_{111} & q_{112} & q_{122} & 0 \\
0 & q_{111} & q_{112} & q_{122} \\
q_{211} & q_{212} & q_{222} & 0 \\
0 & q_{211} & q_{212} & q_{222}
\end{matrix}\right\| = 0$$

(double vertical lines denote the determinant of the matrix). As for the non-homogeneous system: if $(X, Y)$ is its solution, then one can make an analogue of the observation (2.10): the homogeneous systems, obtained from (2.22) by the substitutions $y \to Yz$ and $x \to Xz$ respectively,

$$q_{111} x^2 + (q_{112} Y - \xi_1)\, xz + (q_{122} Y^2 - \eta_1 Y - \zeta_1)\, z^2 = 0,$$
$$q_{211} x^2 + (q_{212} Y - \xi_2)\, xz + (q_{222} Y^2 - \eta_2 Y - \zeta_2)\, z^2 = 0 \qquad (2.25)$$

(and the similar system in the variables $(y, z)$), have solutions $(x, z) = (X, 1)$ and $(z, y) = (1, Y)$ respectively. As in the case of (2.10), this implies that the corresponding resultants vanish, i.e. that $X$ satisfies

$$\left\|\begin{matrix}
q_{111}X^2 - \xi_1 X - \zeta_1 & q_{112}X - \eta_1 & q_{122} & 0 \\
0 & q_{111}X^2 - \xi_1 X - \zeta_1 & q_{112}X - \eta_1 & q_{122} \\
q_{211}X^2 - \xi_2 X - \zeta_2 & q_{212}X - \eta_2 & q_{222} & 0 \\
0 & q_{211}X^2 - \xi_2 X - \zeta_2 & q_{212}X - \eta_2 & q_{222}
\end{matrix}\right\| = 0 \qquad (2.26)$$

and $Y$ satisfies the analogous equation (2.27).

The variables are now separated: the components $X$ and $Y$ are defined from distinct algebraic equations. Solving the system of non-linear equations is thereby reduced to solving individual algebraic equations. The algebro-geometric meaning of this reduction deserves further investigation.

Although $X$ and $Y$ appear independently in (2.26) and (2.27), there is a subtle correlation between their solutions. Each equation is of degree four in its respective variable, and selecting one of the four possible values of $X$ directly determines the corresponding value of $Y$. Consequently, the total number of solutions of (2.22) is $4 = 2\cdot 2 = s_1 s_2$.

For small non-homogeneity we have:

This asymptotic behavior is obvious on dimensional grounds: the dependence on the free terms like $\zeta$ should be $X \sim \zeta^{1/r}$, on the $x$-linear terms like $\xi$ or $\eta$, $X \sim \xi^{1/(r-1)}$, etc.

The non-linear Craemer rule looks just like the linear one, with the resultant in place of the determinant: the $k$-th component $Z_k$ of a solution of the non-homogeneous system satisfies the non-linear Craemer rule III,

$$R_{s_1,\dots,s_n}\{A^{(k)}(Z_k)\} = 0. \qquad (2.30)$$

The tensor $[A^{(k)}(z)]_i^{j_1\dots j_{s_i}}$ in this formula is obtained by the following two-step procedure:

1) With the help of an auxiliary homogeneous variable $z_0$, transform the original non-homogeneous system into a homogeneous one (by inserting appropriate powers of $z_0$ into the items with insufficient powers of the other $z$-variables). At this stage we convert the original system of $n$ non-homogeneous equations in $n$ homogeneous variables $\{z_1,\dots,z_n\}$ into a system of $n$ homogeneous equations, but in $n+1$ homogeneous variables $\{z_0, z_1,\dots,z_n\}$. The $k$-th variable is in no way distinguished at this stage.

2) Substitute for the $k$-th variable the product $z_k = z_0 z$, and treat $z$ as a parameter, not a variable. We obtain a system of $n$ homogeneous equations in $n$ homogeneous variables $\{z_0, z_1,\dots,z_{k-1}, z_{k+1},\dots,z_n\}$, but the coefficients of this system depend on $k$ and on $z$. If one now renames $z_0$ into $z_k$, the coefficients form the tensor $[A^{(k)}(z)]_i^{j_1\dots j_{s_i}}$.

It remains to solve equation (2.30) w.r.t. $z$ and so obtain $Z_k$. The degree of (2.30) in $z$ can be lower than $d_{s_1,\dots,s_n} = \sum_{j=1}^n \prod_{i\neq j} s_i$, because $z$ is absent from many of the coefficients $[A^{(k)}(z)]_i^{j_1\dots j_{s_i}}$. The solutions $Z_k$ of the individual equations, taken from these discrete sets, must then be appropriately correlated (as discussed in s.3.2.3) to combine into full solutions $\{Z_1,\dots,Z_n\}$ of the original system; the total number of different solutions is

$$\#_{s_1,\dots,s_n} = \prod_{i=1}^n s_i.$$

In s.3.4.4 an alternative phrasing of this procedure is presented. It highlights that the Craemer rule, within the realm of non-linear algebra, is related to the Vieta formulas for the roots of polynomials, and that it possesses further generalizations which do not yet have a commonly accepted name.
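A small sympy sketch of the generalized Craemer rule for a system of the type (2.22) (the specific coefficients below are an arbitrary illustration, not taken from the text): eliminating one variable by the resultant yields a degree-4 equation for the other, and the $X$- and $Y$-roots must then be matched:

```python
from sympy import symbols, resultant, solve, simplify, Poly

x, y = symbols('x y')

# Sample system of two non-homogeneous quadratics in two variables
eq1 = x**2 + x*y + y**2 - 1
eq2 = 2*x**2 - y**2 - 1

# Resultant in y: a polynomial in x alone, whose roots are the X-components
Rx = resultant(eq1, eq2, y)
print(Poly(Rx, x).degree())   # 4 = s1*s2 solutions, as expected

# Match each root X with the Y-values that solve both equations
pairs = []
for X in solve(Rx, x):
    for Y in solve(eq1.subs(x, X), y):
        if simplify(eq2.subs({x: X, y: Y})) == 0:
            pairs.append((X, Y))
print(pairs)   # contains, e.g., the solution (1, -1)
```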

3 Evaluation of resultants and their properties

Summary of resultant theory

Tensors, possessing a resultant: generalization of square matrices

The resultant is defined for tensors $A_i^{j_1\dots j_s}$ and $G^{i j_1\dots j_s}$, symmetric in the last $s$ contravariant indices. Each index runs from 1 to $n$; the index $i$ can be either covariant or contravariant. Such a tensor has $n M_{n|s}$ independent coefficients, with

$$M_{n|s} = \frac{(n+s-1)!}{(n-1)!\, s!}.$$

The tensor $A$ can be interpreted as a map $V_n \to V_n$ of degree $s = s_A = |A| = \deg_z A(z)$: it takes values in the same space $V_n$ as its argument $\vec z$.

The tensor $G$ instead maps vectors to covectors; in return, all of its contravariant indices can be treated on equal footing. In particular, $G$ can be a gradient, $G^i(\vec z) = \frac{\partial S(\vec z)}{\partial z_i}$, where $S(\vec z)$ is a homogeneous symmetric function of the $n$ variables $z_1,\dots,z_n$ of degree $r = s+1$. A gradient tensor $G$ is totally symmetric in all its $s+1$ contravariant indices, and the number of its independent coefficients reduces to $M_{n|s+1} = \frac{(n+s)!}{(n-1)!\,(s+1)!}$.

The key distinction between the two maps is that only the map $A: V_n \to V_n$ can be iterated, allowing for the composition of multiple such maps. The map $G: V_n \to V_n^*$ can only be composed with maps of different types.
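The count $M_{n|s}$ is just the number of monomials of degree $s$ in $n$ variables; a quick cross-check (an illustrative choice of $n = 3$, $s = 2$):

```python
from math import factorial
from itertools import combinations_with_replacement

def M(n, s):
    # M_{n|s} = (n+s-1)! / ((n-1)! s!)
    return factorial(n + s - 1) // (factorial(n - 1) * factorial(s))

n, s = 3, 2
# symmetric index combinations (j1 <= j2 <= ... <= js), each j in 1..n
monomials = list(combinations_with_replacement(range(1, n + 1), s))
print(len(monomials), M(n, s))   # 6 6

# A map A_i^{j1...js} has n * M(n, s) independent coefficients; a gradient
# tensor G, symmetric in all s+1 indices, has M(n, s+1) of them.
print(n * M(n, s), M(n, s + 1))  # 18 10
```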

Definition of the resultant: generalization of the condition det A = 0 for solvability of a system of homogeneous linear equations

The vanishing of the resultant is the condition that the map $A_i(\vec z)$ has a non-trivial kernel, i.e. it is the solvability condition for the system of non-linear equations: the system $\{A_i(\vec z) = 0\}$ has a non-vanishing solution $\vec z \neq 0$ iff $R_{n|s}\{A\} = 0$. Similarly, for the map $G^i(\vec z)$: the system $\{G^i(\vec z) = 0\}$ has a non-vanishing solution $\vec z \neq 0$ iff $R_{n|s}\{G\} = 0$.

Although $A_i(\vec z)$ and $G^i(\vec z)$ map to different target spaces, and for $n > 2$ there is no distinguished isomorphism between them, the resultants $R\{A\}$ and $R\{G\}$ are practically the same: to obtain $R\{G\}$ one can simply substitute all the components $A_i$ in $R\{A\}$ by $G^i$, up to an $A$- and $G$-independent normalization factor, which is often irrelevant. This factor reflects the different transformation properties under the extended structure group $GL(n)\times GL(n)$: both $R\{A\}$ and $R\{G\}$ are $SL(n)\times SL(n)$ invariants, but they acquire different factors under generic transformations. All these properties are familiar from determinant theory in linear algebra. In what follows we mostly discuss $R\{A\}$, and usually neglect the differences between the covariant and contravariant resultants.

Degree of the resultant: generalization of $d_{n|1} = \deg_A(\det A) = n$ for matrices

The resultant $R_{n|s}\{A\}$ has degree

$$d_{n|s} = \deg_A R_{n|s}\{A\} = n s^{n-1} \qquad (3.1)$$

in the coefficients of $A$.

The iterated resultant $\tilde R_{n|s}\{A\}$, see s.3.2 below, has degree $\tilde d_{n|s} = \deg_A \tilde R_{n|s}\{A\} = 2^{n-1} s^{2^{n-1}-1}$.

The iterated resultant $\tilde R_{n|s}\{A\}$ depends not only on $A$, but also on the sequence of iterations; we always use the sequence encoded by the triangle graph, Fig. 4.A.

Multiplicativity w.r.t. composition: generalization of det AB = det A det B for determinants

For two maps $A(z)$ and $B(z)$ of degrees $s_A = \deg_z A(z)$ and $s_B = \deg_z B(z)$, the composition $(A\circ B)(z) = A(B(z))$ has degree $s_{A\circ B} = |A\circ B| = s_A s_B$. In more detail, $(A\circ B)_i(z) = A_i^{j_1\dots j_{s_A}} B_{j_1}(z)\cdots B_{j_{s_A}}(z)$.

The multiplicativity property of the resultant w.r.t. composition is R_{n|αβ}(A∘B) = R_{n|α}(A)^{β^{n−1}} R_{n|β}(B)^{α^n}. (3.2)

This formula is nicely consistent with the formula for d_{n|s} and with associativity of composition. We begin with associativity. Denoting the degrees of A, B, C by α, β, γ, we apply (3.2) to either (A∘B)∘C or A∘(B∘C).

Since the two answers coincide, associativity is respected:

R_{n|αβγ}(A∘B∘C) = R_{n|α}(A)^{(βγ)^{n−1}} R_{n|β}(B)^{α^n γ^{n−1}} R_{n|γ}(C)^{(αβ)^n}. (3.3)

The next check is of consistency between (3.2) and (3.1). According to (3.1),

R_{N|α}(A) ∼ A^{d_{N|α}}, and the composition (A∘B) has power αβ in the z-variable and coefficients ∼ A B^α: schematically, z → A(B z^β) ∼ A B^α z^{αβ}. Thus R_{N|αβ}(A∘B) ∼ (A B^α)^{d_{N|αβ}}.

If it is split into a product of R's, as in (3.2), then, from power-counting in the above expressions, this should be equal to R_{N|α}(A)^{d_{N|αβ}/d_{N|α}} · R_{N|β}(B)^{α d_{N|αβ}/d_{N|β}}.

In other words, the powers in (3.2) are d_{N|αβ}/d_{N|α} = (αβ)^{N−1}/α^{N−1} = β^{N−1} and α d_{N|αβ}/d_{N|β} = α (αβ)^{N−1}/β^{N−1} = α^N, as stated.
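For n = 2 the multiplicativity (3.2) can be tested directly with computer algebra (a sketch assuming sympy; the maps and symbols are ours): composing a linear map A (α = 1) with a generic quadratic map B (β = 2), formula (3.2) predicts R_{2|2}(A∘B) = R_{2|1}(A)^{β^{n−1}} R_{2|2}(B)^{α^n} = (det A)^2 R_{2|2}(B).

```python
import sympy as sp

x = sp.symbols('x')
p, q, r, s = sp.symbols('p q r s')   # linear map A, i.e. a 2x2 matrix
b = sp.symbols('b0:6')               # coefficients of the quadratic map B

# B: V_2 -> V_2 of degree 2, written in the affine chart z2 = 1.
B1 = b[0]*x**2 + b[1]*x + b[2]
B2 = b[3]*x**2 + b[4]*x + b[5]

# (A o B)_i = sum_j A_i^j B_j: composition of the linear map A with B.
C1 = p*B1 + q*B2
C2 = r*B1 + s*B2

lhs = sp.resultant(C1, C2, x)                     # R_{2|2}(A o B)
rhs = (p*s - q*r)**2 * sp.resultant(B1, B2, x)    # (det A)^2 * R_{2|2}(B)
print(sp.expand(lhs - rhs) == 0)  # True
```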

Resultant for diagonal maps: generalization of det diag(a_1, …, a_n) = Π_{i=1}^n a_i

We call maps of the special form A_i(z) = A_i z_i^s diagonal. For a diagonal map the resultant is R_{n|s}{A} = (Π_{i=1}^n A_i)^{s^{n−1}}.

Indeed, non-vanishing solutions exist iff at least one of the coefficients A_i vanishes; then the corresponding z_i provides a non-vanishing solution. The common power s^{n−1} is then easily fixed from (3.1).

Resultant for matrix-like maps: a more interesting generalization of det

A wider class, still within the reach of matrix theory, is formed by maps of the special form A_i(z) = Σ_{j=1}^n A_i^j z_j^s, which we call matrix-like; they are parameterized by ordinary n×n matrices A_i^j.

For the matrix-like map the resultant is R_{n|s}{A} = (det_{1≤i,j≤n} A_i^j)^{s^{n−1}}.
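For n = 2, s = 2 this reduction to the determinant can be verified directly (a sympy sketch; the symbols are ours):

```python
import sympy as sp

x = sp.symbols('x')
a, b, c, d = sp.symbols('a b c d')  # the 2x2 matrix A_i^j

# Matrix-like map with n = 2, s = 2, in the affine chart z2 = 1:
# A_1 = a z1^2 + b z2^2,  A_2 = c z1^2 + d z2^2.
A1 = a*x**2 + b
A2 = c*x**2 + d

R = sp.resultant(A1, A2, x)
# Expected: (det A)^{s^{n-1}} = (ad - bc)^2.
print(sp.expand(R - (a*d - b*c)**2) == 0)  # True
```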

The iterated resultant ˜R, see s.3.2, is built with the help of the triangle graph in Fig. 4.A. For such maps its multiplicative decomposition is highly reducible, containing many factors rather than just two, but it remains fully explicit.

The structure and notation are clear from a particular example, see eq. (3.17) below:

The resultant itself is given by the first factor, but in another power: s^{n−1} = s^5 rather than the total s^{2^{n−1}−1} = s^{31} (here n = 6).

Additive decomposition: generalization of det A = Σ_σ (−)^σ Π_i A_i^{σ(i)} for determinants

Just as the determinant is obtained from the diagonal term Π_{i=1}^n a_i^i by adding permutations, the resultant for a generic A(z) is obtained by adding to the matrix-like contribution (det_{ij} a_i^{j…j})^{s^{n−1}} a variety of other terms.

Other terms differ from (3.7) by certain permutations of the upper indices between the s^{n−1} elementary determinants in the product. We illustrate this with an example below; in such examples with n = 2 we frequently denote a_1 by a and a_2 by b.

The total number of independent elementary determinants is M_{n|s}!/(n!(M_{n|s}−n)!), where M_{n|s} = (n+s−1)!/((n−1)!s!). The resultant is a sum of various products of s^{n−1} elementary determinants; some products do not contribute, and some enter with coefficients different from one.

Elementary determinants can be conveniently labeled as U^{(α)}_{ν_1, ν_2, …, ν_{n−1}}, where ν_1 is the number of upper indices equal to 1, ν_2 the number equal to 2, and so on. Note that ν_n is not independent, since the total number of indices is fixed: ν_1 + ν_2 + … + ν_{n−1} + ν_n = ns.

For large enough n and s the set {ν_1, …, ν_{n−1}} does not define U^{(α)}_{ν_1,…,ν_{n−1}} uniquely: the indices can be distributed between the rows of the determinant in different ways, and the extra superscript (α) resolves this ambiguity. For small n and s we write simply U ≡ U^{(1)} and V ≡ U^{(2)}. In this notation we have the following example.

For n = 2, s = 3 there are M_{2|3}!/(2!(M_{2|3}−2)!) = 4!/(2!2!) = 6 (with M_{2|3} = 4!/(1!3!) = 4) linearly independent elementary determinants, given by

Eq. (3.9) can be rewritten in many different forms, because there are two non-linear relations among the ten cubic combinations (with the appropriate gradation number: the sum of indices equal to nine) of the six elementary determinants, which themselves depend on only eight independent coefficients a111, a112, a122, a222, b111, b112, b122, b222. The two cubic relations arise from multiplication of a single quadratic one by U_3 and V_3.

The next resultant, R_{2|4}, is a linear combination of quartic expressions made from 10 elementary determinants.

In general there are M_{n|s}!/((2n)!(M_{n|s}−2n)!) quadratic Plücker relations between the n×n elementary determinants: for any set α_1, …, α_{2n} of multi-indices (of length s)

Evaluation of resultants

There are three distinct approaches to the evaluation of resultants, based respectively on elementary algebra (the theory of polynomial roots), on linear algebra (homological methods), and on tensor algebra (the theory of Feynman diagrams). The first is an iterative procedure: taking ordinary resultants with respect to one variable at a time generates a sequence of iterated resultants, associated with various simplicial complexes; the resultant itself is a common irreducible factor of all the iterated resultants, see s.3.2.

– The resultant can be defined as the determinant of the Koszul differential complex: it vanishes when the Koszul complex fails to be exact and acquires non-trivial cohomology, see s.3.3.

– The resultant is an invariant of the SL(n)×SL(n) structure group and can be expressed through a certain combination of Feynman-like diagrams; the full set of relevant diagrams reflects the structure of the tensor algebra generated by the corresponding tensor.

Iterated resultants and solvability of systems of non-linear equations

Definition of the iterated resultant ˜R_{n|s}{A}

Let us consider a system of n homogeneous equations

where A_i(z) are homogeneous polynomials of the n variables z = (z_1, …, z_n). Since the number of equations equals the number of variables, the system is overdefined, and non-vanishing solutions exist only if one constraint, R{A} = 0, is imposed on the coefficients of the polynomials. Our goal in this section is to express this constraint through a chain of iterated resultants.

Let Res_{z_i}(A_1, A_2) denote the ordinary resultant of two polynomials A_1(z) and A_2(z), considered as polynomials of the single variable z_i, with all other variables z_j treated as parameters. We now define ˜R_k{A_1, …, A_k} iteratively:

The lowest entries of the hierarchy are (see Fig.4.A):

Two polynomials f(z) and g(z) of a single variable have a common root iff their ordinary resultant Res_z(f, g) = 0. From this it is obvious that for (3.10) to have non-vanishing solutions one should have ˜R_n{A_1, …, A_n} = 0. (3.13)
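The single-variable statement is easy to check (a sympy sketch with hypothetical example polynomials):

```python
import sympy as sp

x = sp.symbols('x')

f = (x - 1)*(x - 2)   # roots 1, 2
g = (x - 2)*(x - 3)   # roots 2, 3: shares the root 2 with f
h = (x - 3)*(x - 4)   # roots 3, 4: no common root with f

print(sp.resultant(f, g, x))  # 0, because of the common root x = 2
# For monic f, h the resultant is the product of root differences:
# (1-3)(1-4)(2-3)(2-4) = 12.
print(sp.resultant(f, h, x))  # 12
```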

However, the inverse is not true: (3.13) can have extra solutions, corresponding to the solvability of subsystems of (3.10) rather than of the entire system. What we need is an irreducible component, R{A} ≡ irf(˜R{A}).

Together with the iterated resultant in (3.13), many other iterated resultants, obtained by permutations of the z-variables in the iteration procedure (see Fig. 4.B), vanish as well; R{A} is a common divisor of all these iterated resultants.

Actually, analytical expressions look somewhat better for Fig.4.B than for Fig.4.A, and we use Fig.4.B in examples below.

Linear equations

Let A_i(z) = Σ_{j=1}^n a_i^j z_j. In this case the solvability condition is nothing but det a_i^j = 0.

Let us now see how it arises in our iterated resultant construction. For linear functions A_i(z) introduce the truncated sums ˜a_i^k(z) = Σ_{j=k}^n a_i^j z_j.

The sequence of iterations in the definition of the iterated resultant can be ordered in different ways. The triangle graph of Fig. 4.A is the maximally "ordered" choice, expressed in (3.12); Fig. 4.B reflects the "natural" iteration procedure of (3.15) and (3.16). These pictures make it clear that the choice of the iteration sequence corresponds to a choice of simplicial structure on the set of equations.

(superscripts are indices, not powers!). Substituting now ˜a_1^2 = a_1^2 z_2 + ˜a_1^3 and ˜a_2^2 = a_2^2 z_2 + ˜a_2^3, we find

The factor a_1^1 on the right-hand side implies that ˜R_3 vanishes whenever a_1^1 = 0: in that case both Res_{z_1}(A_1, A_2) and Res_{z_1}(A_1, A_3) are proportional to ˜a_1^2 = a_1^2 z_2 + ˜a_1^3 and share the common root z_2 = −˜a_1^3/a_1^2. However, this does not correspond to a non-trivial solution of the entire system: at this point the z_1-roots of A_2 and A_3 are different, unless the 3×3 determinant also vanishes.

To make the next step, substitute ˜a_i^3 = a_i^3 z_3 + ˜a_i^4, and obtain

This ˜R_n is a homogeneous polynomial of degree n + Σ_{k=1}^{n−2} 2^{n−2−k} k = 2^{n−1} in the a's.

R_n{A_1, …, A_n} = det_{1≤i,j≤n} a_i^j, (3.18) providing the solvability criterion for the system of linear equations, is the last factor in the product (3.17). It can be obtained from ˜R by the inverse iterative procedure:
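For n = 3 the factorization of the iterated resultant into the spurious factor a_1^1 times the 3×3 determinant can be reproduced with computer algebra (a sketch assuming sympy; the chart z_3 = 1 and the variable names are ours). Note the degree count: 2^{n−1} = 4 = 1 + 3.

```python
import sympy as sp

z1, z2 = sp.symbols('z1 z2')
(a11, a12, a13,
 a21, a22, a23,
 a31, a32, a33) = sp.symbols('a11 a12 a13 a21 a22 a23 a31 a32 a33')
A = sp.Matrix([[a11, a12, a13], [a21, a22, a23], [a31, a32, a33]])

# Three linear forms in the chart z3 = 1: A_i = a_i1 z1 + a_i2 z2 + a_i3.
eqs = [A[i, 0]*z1 + A[i, 1]*z2 + A[i, 2] for i in range(3)]

R12 = sp.resultant(eqs[0], eqs[1], z1)   # eliminate z1 from (A1, A2)
R13 = sp.resultant(eqs[0], eqs[2], z1)   # eliminate z1 from (A1, A3)
R3t = sp.resultant(R12, R13, z2)         # iterated resultant

# The irreducible resultant is det(a_i^j); the extra factor is a11.
print(sp.expand(R3t - a11*A.det()) == 0)  # True
```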

On the origin of extra factors in ˜ R

Though the linear example illustrates well the very fact that ˜R is reducible, the origin of the extra factors ˜R/R is somewhat specific in this case.

Let A_i(z) be polynomials of degree s in their variables z. Then vanishing of, say, ˜R_3{A_1, A_2, A_3} implies that there exists a value Y of the z_2-variable such that Res_{z_1}(A_1, A_2)|_{z_2=Y} = 0 and

Res_{z_1}(A_1, A_3)|_{z_2=Y} = 0, (3.20) i.e. that there are some values X_2 and X_3 of the z_1-variable such that A_1(X_2, Y) = A_2(X_2, Y) = 0 and A_1(X_3, Y) = A_3(X_3, Y) = 0

(all other z_i-variables with i ≥ 3 are considered as sterile parameters in this calculation). The solvability (3.23) of the original system requires that X_2 = X_3, but this is not necessarily the case.

In general the equation A_1(x, Y) = 0 for x has s different roots x = X_µ(Y), µ = 1, …, s, and our ˜R_3{A_1, A_2, A_3} gets contributions from common solutions of A_2(X_µ(Y), Y) = 0 and A_3(X_ν(Y), Y) = 0 with arbitrary pairs (µ, ν), not only with µ = ν.

The multiplicity here is not simply 2: the individual Y-resultants for particular values of µ and ν are not polynomials in the coefficients of the A's; only appropriate products over µ and ν are. These provide several independent factors, and only one of them, arising from the product over all s values of µ and ν, is the irreducible resultant R_3{A_1, A_2, A_3}.

The analysis is similar for higher ˜R_k.

To extract R from ˜R one can evaluate a set of ˜R's with reordered z-variables: the resulting iterated resultants have extra factors depending on different sets of coefficients of A, and R emerges as their common divisor. This is especially useful when the resultant R is needed for a one-parameter family of polynomials A_i(z).

The linear case s = 1 is special: the above analysis suggests that no extra factors should appear at all, while in fact they do. The reason is that for s = 1 the non-general position has codimension one. For a linear function A_1(x, Y) to have multiple x-roots, a single condition, a_1^1 = 0, suffices. Likewise, for the system of linear equations A_i(z_1, …, z_k, Z_{k+1}, …, Z_l) = 0, i = 1, …, k, with l ≥ k + 2, a single condition, det_{1≤i,j≤k} a_i^j = 0, is enough to produce multiple non-vanishing solutions z_1 = Z_1, …, z_k = Z_k. This explains the appearance of the extra factors in linear systems. For higher s ≥ 2 the non-general positions have higher codimension and do not affect the structure of the solvability constraint ˜R = 0.

Quadratic equations

Let now A_i(z) = Σ_{j,k=1}^n a_i^{jk} z_j z_k. Then

(3.25) Substituting now ˜a_1^{12} = a_1^{12} z_2 + ˜a_1^{13}, ˜a_1^{22} = a_1^{22} z_2^2 + ˜a_1^{23} z_2 + ˜a_1^{33}, ˜a_2^{12} = a_2^{12} z_2 + ˜a_2^{13}, ˜a_2^{22} = a_2^{22} z_2^2 + ˜a_2^{23} z_2 + ˜a_2^{33}, (3.26) we can find

˜R_2 is a polynomial of degree 4 in z_2, so that ˜R_3 is a polynomial of degree 8 in the coefficients of ˜R_2, which are themselves quartic in the a's. Thus the total a-degree of ˜R_3 is 32 = 12 + 20, and the symmetric resultant R_3{A_1, A_2, A_3} is its irreducible factor of degree 12.

An example of a cubic equation

Take for a cubic form a cubic function of a single vector with three components x, y, z:

S(x, y, z) = (1/3) a x^3 + (1/3) b y^3 + (1/3) c z^3 + 2ε xyz, (3.28) i.e. the non-vanishing elements S^{ijk} are:

The resultant of the system ∂S = 0,

(3.30) is equal to a degree-twelve polynomial in the coefficients S^{ijk},

Indeed, the typical resultants of pairs of equations in (3.30) are:
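The displayed formulas did not survive this extraction, but they can be reproduced with computer algebra. The following sympy sketch (working in the chart z = 1) recomputes a typical pair resultant and checks that the combination abc + 8ε^3 divides the iterated resultant: at a = b = c = 1, ε = −1/2 the system has the explicit non-trivial solution x = y = z = 1, so this factor must divide the resultant. The closed forms here are computed by the code, not quoted from the original text.

```python
import sympy as sp

x, y, a, b, c, e = sp.symbols('x y a b c epsilon')

# Gradient map of S = (a x^3 + b y^3 + c z^3)/3 + 2 eps x y z, in the chart z = 1.
A1 = a*x**2 + 2*e*y          # dS/dx
A2 = b*y**2 + 2*e*x          # dS/dy
A3 = c + 2*e*x*y             # dS/dz

R12 = sp.expand(sp.resultant(A1, A2, x))   # a typical pair resultant
R13 = sp.expand(sp.resultant(A1, A3, x))
Rt = sp.expand(sp.resultant(R12, R13, y))  # iterated resultant

# The factor a*b*c + 8*eps^3 (or its negative) must appear in the
# irreducible decomposition of Rt, since R{A} divides Rt.
target = a*b*c + 8*e**3
hit = any(sp.expand(p - target) == 0 or sp.expand(p + target) == 0
          for p, mult in sp.factor_list(Rt)[1])
print(hit)  # True
```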

Iterated resultant depends on simplicial structure

The iterated resultant ˜R depends on the choice of the iteration sequence: Fig. 4 shows different iterated resultants ˜R for the same set A, corresponding to the different simplicial structures in Figs. 4.A and 4.B. The resultant R{A} is a common divisor of all the ˜R{A|Σ}, for all possible simplicial structures Σ.

Resultants and Koszul complexes [4]-[8]

Koszul complex. I. Definitions

Let P_{n|s} denote the linear space of homogeneous polynomials of degree s in n variables; its dimension is M_{n|s} = (n+s−1)!/((n−1)!s!). A convenient linear basis is the set of all monomials z_{i_1} ⋯ z_{i_s} with 1 ≤ i_1 ≤ … ≤ i_s ≤ n. Our map is a set {A_i(z)} ∈ ⊕_{j=1}^n P_{n|s_j}.
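The dimension formula can be cross-checked against the monomial basis by brute force (a small Python sketch, standard library only):

```python
from math import comb
from itertools import combinations_with_replacement

def dim_P(n, s):
    """Dimension M_{n|s} of homogeneous degree-s polynomials in n variables."""
    return comb(n + s - 1, s)   # = (n+s-1)! / ((n-1)! s!)

def monomial_count(n, s):
    """Count monomials z_{i1}...z_{is} with 1 <= i1 <= ... <= is <= n."""
    return sum(1 for _ in combinations_with_replacement(range(n), s))

print(dim_P(2, 3), dim_P(3, 2))  # 4 6
print(all(dim_P(n, s) == monomial_count(n, s)
          for n in range(1, 6) for s in range(0, 6)))  # True
```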

With a map A_i(z) one can associate a nilpotent operator ˆd = Σ_{i=1}^n A_i(z) ∂/∂θ_i, ˆd^2 = 0, (3.35) with auxiliary Grassmannian (anticommuting and nilpotent) variables θ_i, θ_iθ_j + θ_jθ_i = 0. Given such ˆd, one can define the Koszul complex:

The non-linear map of projective spaces P^{n−1} → P^{n−1} itself is not exploited in this construction; only the set of polynomials {A_i(z)} matters, though we continue to call it a map. The theory of the Koszul complex is insensitive to whether the index i is covariant or contravariant, so it applies equally to tensors T^i describing poly-linear forms and their reductions. In the contravariant case the differential is defined by substituting θ_i for ∂/∂θ^i, so that it can be contracted with T^i; to preserve the structure of the complex, the k-th powers of θ's in all vector spaces are replaced by their dual (n−k)-th powers, with matching dimensions. One can then also substitute the θ's by anticommuting 1-forms dx^i, turning the Koszul complex into the complex of differential forms, graded by rank, with the nilpotent operator ˆd = T_i(x) dx^i acting by wedge multiplication. In the gradient case T_i(x) = ∂_i T(x), so that ˆd = dT. The degrees are defined by the original map, A_i(z) ∈ P_{n|s_i}, and by the degree of the endpoint polynomials, deg_z H(z) = p; then s(i_1, …, i_k) = p − s_{i_1} − … − s_{i_k}.

The Koszul complex is exact, unless the resultant R_{s_1,…,s_n} = 0; thus the resultant is related to the cohomology of the complex. More precisely, the resultant is the determinant of the complex: once bases are chosen in all the spaces, the Koszul complex becomes a collection of rectangular matrices, and its determinant is an alternated product of maximal minors of these matrices, invariant under linear transformations of the bases.

To simplify practical calculations, it is convenient to adjust the degree p so that only a few terms in the complex contribute. If p can be adjusted so that only the last matrix is non-vanishing and square, the answer is given by the determinant of that matrix. Such a fine adjustment is rarely possible, however. The two elementary cases where it is are: s_1 = … = s_n = 1, i.e. ordinary n×n matrices, and n = 2, i.e. ordinary resultants of two polynomials of degrees s_1 and s_2 of a single variable.

In more complicated situations one can at best minimize the number of non-vanishing matrices by taking p as small as possible, and then extract the answer from the remaining data.

Now we consider a few examples and afterwards return to some more details about the general case.

Linear maps (the case of s_1 = … = s_n = 1)

Minimal option, only one term in the complex is non-trivial:

In this case s_1 = … = s_n = 1, A_i(z) = Σ_{j=1}^n A_i^j z_j, one can choose p = 1 and then the entire complex reduces to the last term:

The set of constants (α_1, …, α_n), i.e. degree-zero polynomials from the n copies of P_{n|0}, is mapped into linear functions: α_i ↦ Σ_{i,j} α_i A_i^j z_j. In the bases {α_i} in the left space and {z_j} in the right space this map is described by the matrix A_i^j, and the answer is given by the determinant of this matrix.

Two terms contributing, an example:

If for the same collection of linear maps, s_1 = … = s_n = 1, A_i(z) = Σ_{j=1}^n A_i^j z_j, we take p = 2 instead of p = 1, then the complex reduces to the last two terms:

→ P_{n|2} → 0. The first map takes the set of n(n−1)/2 constants α^{ij} = −α^{ji} into

A_k(z) ∂/∂θ_k (α^{ij} θ_i θ_j) = A_k(z)(α^{ki} − α^{ik}) θ_i, i.e. into

−2 Σ_{k=1}^n α^{ik} A_k(z) = −2 Σ_{j,k=1}^n α^{ik} A_k^j z_j, i.e. it is described by a rectangular n(n−1)/2 × n^2 matrix. The second map takes the set of n linear functions Σ_{j=1}^n β^{ij} z_j into the n(n+1)/2-dimensional linear space of quadratic functions,

A_k(z) ∂/∂θ_k (β^{ij} z_j θ_i) = β^{ij} A_i(z) z_j = β^{ij} A_i^k z_j z_k, and is described by a rectangular n^2 × n(n+1)/2 matrix.

For example, if n = 2 the two matrices are 1×4 and 4×3:

It is easy to check that Σ_{a=1}^4 B_j^a A_a = 0 (i.e. that ˆd^2 = 0), and that ε_{a a_1 a_2 a_3} B^{a_1}_{j_1} B^{a_2}_{j_2} B^{a_3}_{j_3} ε^{j_1 j_2 j_3} = A_a R_{2|1}{A}, with R_{2|1}{A} = det(A_i^j) built from A_1^1, A_1^2, A_2^1, A_2^2, and with the totally antisymmetric tensors ε_{a_1 a_2 a_3 a_4} and ε^{j_1 j_2 j_3} of ranks 4 and 3.

m terms contributing:

In general, for linear maps s_1 = … = s_n = 1 the number of non-trivial terms in the Koszul complex is equal to m, and the complex is a collection of m rectangular matrices of sizes (n!/((n−m)! m!)) M_{n|0} × (n!/((n−m+1)!(m−1)!)) M_{n|1}, …, (n!/((n−m+k)!(m−k)!)) M_{n|k} × (n!/((n−m+k+1)!(m−k−1)!)) M_{n|k+1}, …, n M_{n|m−1} × M_{n|m}. Still, the alternated combination of their minors provides the same quantity, R_{2|1}{A} = det_{2×2} A in our n = 2 example.

5 We are indebted to A.Gorodentsev for comments about this method.

A pair of polynomials (the case of n = 2)

Minimal option, only one term in the complex is non-trivial:

For n = 2 the map A_i(z) consists of two polynomials, f(z) = Σ_{k=0}^{s_1} f_k z^k and g(z) = Σ_{k=0}^{s_2} g_k z^k, and the complex reduces to the last term,

→ P_{2|p}, if p is adjusted to make the matrix square: M_{2|p−s_1} + M_{2|p−s_2} = M_{2|p}, i.e. (p − s_1 + 1) + (p − s_2 + 1) = p + 1, or p = s_1 + s_2 − 1. With this choice p − s_1 = s_2 − 1 < s_2 and p − s_2 = s_1 − 1 < s_1, so that the preceding term in the complex would involve negative degrees and thus does not contribute.

With this choice of p the map ˆd is (Σ_{i=0}^{p−s_1} α_i z^i)(Σ_{k=0}^{s_1} f_k z^k) + (Σ_{i=0}^{p−s_2} β_i z^i)(Σ_{k=0}^{s_2} g_k z^k), and in the basis {α_k, β_k} in the left space and {z^0, …, z^p} in the right space the matrix looks like

(3.38) and the determinant of this (s_1+s_2)×(s_1+s_2) matrix (where s_1 + s_2 = M_{2|s_1+s_2−1}) is exactly the resultant of f(z) and g(z), cited in (2.21).
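The matrix (3.38) is the classical Sylvester matrix, and its determinant can be compared with the resultant directly (a sympy sketch; the example polynomials are ours, and the determinant is compared up to an overall sign, which depends on row-ordering conventions):

```python
import sympy as sp

x = sp.symbols('x')

def koszul_matrix(f, g, x):
    """(s1+s2)x(s1+s2) matrix of (alpha, beta) -> alpha f + beta g at p = s1+s2-1,
    i.e. the classical Sylvester matrix in the monomial basis 1, x, ..., x^p."""
    s1, s2 = sp.degree(f, x), sp.degree(g, x)
    rows = []
    for i in range(s2):   # shifts x^i * f, i = 0..s2-1
        rows.append(sp.Poly(f*x**i, x).all_coeffs()[::-1] + [0]*(s2 - 1 - i))
    for i in range(s1):   # shifts x^i * g, i = 0..s1-1
        rows.append(sp.Poly(g*x**i, x).all_coeffs()[::-1] + [0]*(s1 - 1 - i))
    return sp.Matrix(rows)

f = 2*x**3 + x - 1      # s1 = 3
g = x**2 + 3*x + 2      # s2 = 2
M = koszul_matrix(f, g, x)
print(M.shape)  # (5, 5)
print(M.det() in (sp.resultant(f, g, x), -sp.resultant(f, g, x)))  # True
```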

A triple of polynomials (the case of n = 3)

For three polynomials f_1(z), f_2(z), f_3(z) of three homogeneous variables, of degrees s_1, s_2 and s_3, the last two terms of the Koszul complex are:

Here (i, j, k) denote ordered triples: if i = 1, then j = 2 and k = 3, and so on. The dimensions of the three linear spaces are Σ_{i=1}^3 M_{3|p−s_j−s_k} = (1/2) Σ_{i=1}^3 (p−s_j−s_k+1)(p−s_j−s_k+2), Σ_{i=1}^3 M_{3|p−s_i} = (1/2) Σ_{i=1}^3 (p−s_i+1)(p−s_i+2), and M_{3|p} = (p+1)(p+2)/2. The middle dimension equals the sum of the other two, Σ_{i=1}^3 (p−s_i+1)(p−s_i+2) = (p+1)(p+2) + Σ_{i=1}^3 (p−s_j−s_k+1)(p−s_j−s_k+2), provided p = s_1+s_2+s_3−2 or p = s_1+s_2+s_3−1. In this situation the middle space is a direct sum of the other two, and the ratio of determinants of the two resulting square matrices gives the resultant R_{s_1,s_2,s_3}{f_1, f_2, f_3}.

Example: let s_1 = s_2 = s_3 = 2 and take a special family of maps f_i = a_i z_i^2 + 2ε z_j z_k (a_1 = a, a_2 = b, a_3 = c; z_1 = x, z_2 = y, z_3 = z). The first map,

(α, β, γ) ↦ (β(cz^2 + 2εxy) − γ(by^2 + 2εxz), γ(ax^2 + 2εyz) − α(cz^2 + 2εxy), α(by^2 + 2εxz) − β(ax^2 + 2εyz)), is described by the 3×18 matrix:

(its columns are labeled by the monomials x^2, y^2, z^2, xy, yz, zx in each of the three six-column blocks; the explicit entries did not survive this extraction)

(all other entries are zeroes). The second map, (ξ_1(z), ξ_2(z), ξ_3(z)) ↦ Σ_i ξ_i(z) f_i(z), is described by the 18×15 matrix:

(its columns are labeled by the 15 quartic monomials x^4, y^4, z^4, x^3y, x^3z, y^3x, y^3z, z^3x, z^3y, x^2y^2, y^2z^2, z^2x^2, x^2yz, y^2xz, z^2xy, and its rows contain the coefficients a, b, c and 2ε of the products z_j f_i; the explicit entries did not survive this extraction)

Then for any triple of indices 1 ≤ ã_1 < ã_2 < ã_3 ≤ 18, the 3×3 minor of the first matrix built from the corresponding columns and the 15×15 minor of the second matrix with the corresponding rows omitted have a ratio equal to the resultant R_{2,2,2}{f_1, f_2, f_3}.

References
[2] A.Dolotin and A.Morozov, The Universal Mandelbrot Set. Beginning of the Story, World Scientific, 2006; Algebraic Geometry of Discrete Dynamics. The case of one variable, hep-th/0501235
[6] L. Schläfli, Über die Resultante eines Systems mehrerer algebraischen Gleichungen, Denkschriften der Kaiserlichen Akademie der Wissenschaften, math.-naturwiss. Klasse, 4 Band, 1852; Gesammelte Abhandlungen, Band 2, s. 9-112, Birkhäuser Verlag, Basel, 1953
[8] A. Gorodentsev and B. Shapiro, On associated discriminants for polynomials of one variable, Beiträge Algebra Geom. 39 (1998) 53-74
[21] S. Lang, Algebra (1965) Addison-Wesley; A. Kurosh, Course of Higher Algebra (1971) Moscow; MAPLE and Mathematica contain programs for evaluating resultants and discriminants of ordinary polynomials. Hopefully they will soon be extended to the evaluation of resultants and discriminants for arbitrary systems of non-linear and poly-linear equations
[24] J.-P. Serre, A Course in Arithmetic, Springer, 1973; S. Lang, Elliptic Functions (1973) Addison-Wesley; A. Weil, Elliptic Functions according to Eisenstein and Kronecker (1976) Springer-Verlag (Mir, Moscow, 1978); N. Koblitz, Introduction to Elliptic Curves and Modular Forms (1984) Springer-Verlag
[27] E. Akhmedov, V. Dolotin and A. Morozov, Comment on the Surface Exponential for Tensor Fields, JETP Letters 81 (2005) 639-643 (Pisma v Zh.Eksp.Teor.Fiz. 81 (2005) 776-779), hep-th/0504160
[4] A. Cayley, On the Theory of Linear Transformations, Camb. Math. J. 4 (1845) 193-209; see [14] for a recent show-up of Cayley's 2×2×2 hyperdeterminant in the string-theory literature, where it appears in the role of the SL(2)^3 invariant
[7] I. Gelfand, M. Kapranov and A. Zelevinsky, Discriminants, Resultants and Multidimensional Determinants (1994) Birkhäuser
[9] V. Dolotin, On Discriminants of Polylinear Forms, alg-geom/9511010

[10] V. Dolotin, On Invariant Theory, alg-geom/9512011
[17] A. Gerasimov, A. Morozov and K. Selivanov, Bogolubov's Recursion and Integrability of Effective Actions, Int. J. Mod. Phys. A16 (2001) 1531-1558, hep-th/0005053
[22] N. Berkovits, JHEP 0004 (2000) 018, hep-th/0001035; JHEP 0409 (2004) 047, hep-th/0406055; A. Gorodentsev and A. Losev, Lectures at ITEP-DIAS School, July 2004
