About Me

Hello! My name is Amber. I'm a huge weeb, haha! I like doing math stuff and watching anime. I like KissAnime, as it seems to have the largest variety. Here are some handy notes that I've written up on some topics I like. Look for the appropriate section.

Examples and derivation of non-integer factorials

The case of three factorial

It has been mentioned that the gamma function can fill in for the factorial. The motivating logic behind this is mainly the integration by parts rule. That is:

\int_C u\,dv = \left[ uv \right]_C - \int_C v\,du

for some functions u and v of x over some boundary C. Consider the integral representation of the factorial of s:

s! = \int_0^\infty x^s e^{-x}\,dx

Apply the integration by parts rule for s = 3:

3! = \int_0^\infty x^3 e^{-x}\,dx

In the case of factorials, one generally sets the power term to u:

u = x^3, \qquad du = 3x^2\,dx
dv = e^{-x}\,dx, \qquad v = -e^{-x}

One can then apply integration by parts:

\int_0^\infty x^3 e^{-x}\,dx = \left[ -x^3 e^{-x} \right]_0^\infty + 3 \int_0^\infty x^2 e^{-x}\,dx

Observe how the integral on the right became 3 times the factorial of 2. Take the limits of the boundary term:

\left[ -x^3 e^{-x} \right]_0^\infty = \lim_{x \to \infty} \left( -\frac{x^3}{e^x} \right) - 0

One can use L'Hôpital's rule to show that this limit approaches zero. This implies that:

3! = 3 \int_0^\infty x^2 e^{-x}\,dx = 3 \cdot 2!

Regarding limits of the form:

\lim_{x \to \infty} \frac{x^n}{e^x}

the power is always reduced to zero by repeated differentiation before the exponential is; for any n, repeated application of L'Hôpital's rule shows that this limit goes to zero. Regarding the integral on the right, working out the second step for 3! gives:

3 \int_0^\infty x^2 e^{-x}\,dx = 3 \cdot 2 \int_0^\infty x\,e^{-x}\,dx

Integrating again yields:

3 \cdot 2 \int_0^\infty x\,e^{-x}\,dx = 3 \cdot 2 \cdot 1 \int_0^\infty e^{-x}\,dx = 3 \cdot 2 \cdot 1 = 6
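This result can be sanity-checked numerically. The sketch below (an illustration, not part of the derivation) approximates the integral with a simple midpoint rule; the cutoff at x = 60 and the step count are arbitrary choices, made so the truncated tail is negligible.

```python
import math

# Midpoint-rule approximation of the factorial integral
#   s! = \int_0^\infty x^s e^{-x} dx
# truncated at x = 60, where the integrand is negligibly small.
def factorial_integral(s, upper=60.0, steps=200_000):
    dx = upper / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * dx
        total += x ** s * math.exp(-x) * dx
    return total

print(factorial_integral(3))  # close to 3! = 6
```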

The general case for integers

In general, the integral on the right produces the factorial of the number one smaller, multiplied by the number whose factorial is being taken in the first place. At every step, L'Hôpital's rule shows that the boundary term involving the power and exponential goes to zero, so the logic of integration by parts always produces this cascading product on the right side. That is:

s! = \int_0^\infty x^s e^{-x}\,dx = \left[ -x^s e^{-x} \right]_0^\infty + s \int_0^\infty x^{s-1} e^{-x}\,dx
= s \int_0^\infty x^{s-1} e^{-x}\,dx
= s \cdot (s-1)!

This meets the defining requirement of the factorial in the sense that:

s! = s \cdot (s-1)!

One could then use the logic of integration by parts and L'Hôpital's rule to conclude that, for integers,

s! = s (s-1) (s-2) \cdots 2 \cdot 1

One can see that the 1 in this product corresponds to:

1! = \int_0^\infty x\,e^{-x}\,dx = 1, or
0! = \int_0^\infty e^{-x}\,dx = 1

depending on how one wants to view it. In this sense, it may even make sense to say that:

0! = 1

so that the identity s! = s \cdot (s-1)! holds for any positive integer s.
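The integer case can be cross-checked against Python's math.gamma, which implements the gamma function; Γ(n + 1) should match n! for small non-negative integers (a numerical illustration, not a proof):

```python
import math

# Gamma(n + 1) = n! for non-negative integers n
for n in range(6):
    print(n, math.factorial(n), math.gamma(n + 1))
```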

The case of halves

While this definition of the factorial (the integral definition) fits all of the properties of the factorial for integers, plugging in non-integer values could also make sense. One could examine the case of 1.5 factorial:

\left( \tfrac{3}{2} \right)! = \int_0^\infty x^{3/2} e^{-x}\,dx

One can apply integration by parts, again setting the power term to u:

u = x^{3/2}, \qquad du = \tfrac{3}{2} x^{1/2}\,dx
dv = e^{-x}\,dx, \qquad v = -e^{-x}

One can then apply integration by parts:

\int_0^\infty x^{3/2} e^{-x}\,dx = \left[ -x^{3/2} e^{-x} \right]_0^\infty + \tfrac{3}{2} \int_0^\infty x^{1/2} e^{-x}\,dx

Using L'Hôpital's rule on the boundary term:

\lim_{x \to \infty} \frac{x^{3/2}}{e^x} = \lim_{x \to \infty} \frac{\tfrac{3}{2} x^{1/2}}{e^x} = \lim_{x \to \infty} \frac{\tfrac{3}{4} x^{-1/2}}{e^x} = 0

This shows that:

\left( \tfrac{3}{2} \right)! = \tfrac{3}{2} \cdot \left( \tfrac{1}{2} \right)!

In general, one could use integration by parts and L'Hôpital's rule to show that:

\left( \tfrac{k}{2} \right)! = \tfrac{k}{2} \cdot \left( \tfrac{k-2}{2} \right)!

for any odd integer k.
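This half-integer recursion can also be sanity-checked with math.gamma, using x! = Γ(x + 1) (a numerical illustration, not a proof):

```python
import math

def half_factorial(k):
    # (k/2)! via the gamma function: x! = Gamma(x + 1)
    return math.gamma(k / 2 + 1)

# (k/2)! should equal (k/2) * ((k-2)/2)! for odd k
for k in (1, 3, 5, 7):
    print(k, half_factorial(k), (k / 2) * half_factorial(k - 2))
```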

The Factorial of a half

By the extended definition:

\left( \tfrac{1}{2} \right)! = \int_0^\infty x^{1/2} e^{-x}\,dx

Using integration by parts and L'Hôpital's rule implies that:

\left( \tfrac{1}{2} \right)! = \tfrac{1}{2} \int_0^\infty x^{-1/2} e^{-x}\,dx

One can apply integration by substitution with:

y = x^{1/2}, \qquad dy = \tfrac{1}{2} x^{-1/2}\,dx

Applying integration by substitution yields the Gaussian integral:

\left( \tfrac{1}{2} \right)! = \tfrac{1}{2} \int_0^\infty x^{-1/2} e^{-x}\,dx = \int_0^\infty e^{-y^2}\,dy = \frac{\sqrt{\pi}}{2}
[Image: sin(y) * x plotted in GNU Plot]
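The closing identity (1/2)! = √π/2 ≈ 0.8862 can be verified numerically, both via math.gamma and by approximating the Gaussian integral directly (the cutoff at y = 10 is an arbitrary choice; the truncated tail is negligible):

```python
import math

# (1/2)! = Gamma(3/2), which should equal sqrt(pi)/2
print(math.gamma(1.5), math.sqrt(math.pi) / 2)

# Midpoint-rule approximation of \int_0^\infty e^{-y^2} dy
def gaussian_integral(upper=10.0, steps=100_000):
    dy = upper / steps
    return sum(math.exp(-(((i + 0.5) * dy) ** 2)) * dy for i in range(steps))

print(gaussian_integral())  # also close to 0.8862
```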

Vector

In simplest terms, a vector is a mathematical construct that can be scaled and added to other vectors; Euclidean vectors have a direction and length and are drawn as arrows. An abstract vector is an element of a vector space.[1][2] That is, vectors are objects that can be scaled and added together in special ways. The addition of vectors is commutative and associative, meaning that the order and grouping of the addition of vectors does not matter. Furthermore, the way in which vectors are scaled is compatible with field multiplication; this means that scaling a vector by a factor of a and then scaling it by a factor of b is the same as scaling the vector by a factor of ab. Every vector has an "opposite" vector for addition; that is, when a vector and its "opposite" vector are added, the resultant sum is the zero vector. Finally, the zero vector is a vector that does not affect other vectors under vector addition. Vectors are a specific kind of the more general notion of a tensor.

Vector space

A vector space is any set V over a field F on which vector addition and scalar multiplication are defined and which meets the eight axioms of a vector space.[3][4][5] Vector addition, +: V × V → V, takes two elements of V and assigns them to another element of V. Scalar multiplication, ·: F × V → V, takes an arbitrary scalar in F and an element of V and assigns them to another element of V. The eight axioms of a vector space are:

  1. There exists an additive identity element in V: v + 0 = v
  2. There exists a multiplicative identity scalar in F: 1 · v = v
  3. There exists an additive inverse for every element in V: v + (−v) = 0
  4. The vector addition operation is commutative: u + v = v + u
  5. The vector addition operation is associative: (u + v) + w = u + (v + w)
  6. The scalar multiplication operation · and field multiplication in F are compatible: a · (b · v) = (ab) · v
  7. Scalar multiplication · is distributive over vector addition +: a · (u + v) = a · u + a · v
  8. Scalar multiplication · is distributive over field addition in F: (a + b) · v = a · v + b · v

Elements of V are referred to as vectors and elements of F are typically called scalars.
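A few of these axioms can be illustrated concretely in ℝ³, modeling vectors as 3-tuples of floats (a toy sketch; checking specific values is not a proof over all vectors):

```python
# Vectors in R^3 as 3-tuples; vector addition and scalar multiplication
def add(u, v):
    return tuple(a + b for a, b in zip(u, v))

def scale(c, v):
    return tuple(c * a for a in v)

u, v, w = (1.0, 2.0, 3.0), (4.0, -5.0, 6.0), (-7.0, 8.0, 9.0)
zero = (0.0, 0.0, 0.0)

print(add(u, zero) == u)                           # axiom 1: additive identity
print(add(u, scale(-1.0, u)) == zero)              # axiom 3: additive inverse
print(add(u, v) == add(v, u))                      # axiom 4: commutativity
print(add(add(u, v), w) == add(u, add(v, w)))      # axiom 5: associativity
print(scale(2.0, scale(3.0, u)) == scale(6.0, u))  # axiom 6: compatibility
```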

Vector subspace

A set W is a vector subspace of a vector space V over field F if, under the operations of V, W is itself a vector space over F.[6][7] Equivalently, W is a vector subspace of V if for every u, w ∈ W and a, b ∈ F, the combination a·u + b·w is in W.[6] A subspace is nonempty, contains the zero vector, and is closed under vector addition and scalar multiplication. Every vector space is a vector subspace of itself. This is useful, as it is sometimes much faster to show that W is a vector space by showing that it is a vector subspace of a known vector space rather than verifying all eight axioms directly; this works because every vector subspace is itself a vector space. If W is a vector subspace of V, some authors denote this as W ≤ V.
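The closure form of the criterion can be sketched for the plane W = {(x, y, 0)} inside ℝ³ (a hypothetical example; in_W and combo are names introduced here for illustration):

```python
# Subspace criterion for W = {(x, y, 0)} in V = R^3:
# for u, w in W and scalars a, b, the combination a*u + b*w stays in W.
def in_W(v):
    return v[2] == 0.0

def combo(a, u, b, w):
    return tuple(a * x + b * y for x, y in zip(u, w))

u, w = (1.0, 2.0, 0.0), (-3.0, 4.0, 0.0)
print(in_W(combo(2.0, u, -5.0, w)))                # True: W is closed
print(in_W(combo(1.0, u, 1.0, (0.0, 0.0, 1.0))))  # False: (0, 0, 1) is not in W
```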

Dual vector space

Given a vector space V over field F, the dual vector space of V, denoted V*, is the collection of all linear mappings f: V → F.[8][9][10][11][12]

Example use of dual vector space in spacetime

Consider a vector space V and its dual space V*. One can describe spacetime by a tensor in V* ⊗ V* (two covariant indices) over a four-dimensional differentiable manifold M. This captures the idea of the metric tensor, g_{\mu\nu}, in Einstein's field equations: G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8 \pi G}{c^4} T_{\mu\nu}. The dual space accounts for the two covariant indices of the metric tensor. This abstract dual space finds use in describing gravity.

Euclidean vector

A Euclidean vector is an object that has both magnitude and direction. This definition of a "vector" may be quicker and more intuitive for many applied uses of vectors, such as physics and geometry. Its popularity may be attributed to the fact that Euclidean vectors can be thought of as arrows and form an elementary basis for subjects such as vector calculus. There are many formal definitions of a Euclidean vector, but all capture the idea of a Euclidean vector encoding distance in some way. Both abstract and more applied definitions are listed below.

Definition using the inner product

A Euclidean vector is a member of a Euclidean vector space: a finite-dimensional vector space V over the field of real numbers ℝ, paired with an inner product ⟨·,·⟩: V × V → ℝ that captures distance and meets four axioms:[13]

  1. ⟨u, v⟩ = ⟨v, u⟩
  2. ⟨u + v, w⟩ = ⟨u, w⟩ + ⟨v, w⟩
  3. ⟨a·u, v⟩ = a⟨u, v⟩
  4. ⟨v, v⟩ ≥ 0, and ⟨v, v⟩ = 0 if and only if v = 0

0 represents the additive identity vector (zero vector) in the vector space.
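The standard dot product on ℝ³ is the canonical example of such an inner product; the sketch below runs the four axioms on specific vectors (illustration only, not a proof):

```python
# The dot product on R^3 as a concrete inner product
def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

def add(u, v):
    return tuple(a + b for a, b in zip(u, v))

u, v, w = (1.0, 2.0, 3.0), (4.0, -5.0, 6.0), (-7.0, 8.0, 9.0)

print(inner(u, v) == inner(v, u))                                # axiom 1: symmetry
print(inner(add(u, v), w) == inner(u, w) + inner(v, w))          # axiom 2: additivity
print(inner(tuple(2.0 * a for a in u), v) == 2.0 * inner(u, v))  # axiom 3: homogeneity
print(inner(u, u) > 0.0)                                         # axiom 4: positivity
```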

Definition by orthonormal basis vectors

A Euclidean vector is a linear combination of orthonormal vectors.[14] Orthonormal vectors are vectors that are orthogonal to each other and have unit length. Two vectors, u and v, are orthogonal if their inner product is zero: ⟨u, v⟩ = 0. If one takes the orthogonal vectors u and v and divides each by its magnitude, |u| and |v|, the resulting vectors u/|u| and v/|v| are orthonormal. Given a set of orthonormal vectors {e_1, …, e_n}, a linear combination of these vectors is of the form a_1 e_1 + ⋯ + a_n e_n, where the a_i are real numbers.[14] Using sum notation, a Euclidean vector is of the form v = \sum_{i=1}^{n} a^i e_i. Equivalently, in Einstein sum notation, v = a^i e_i. The index on a is a superscript in this notation to indicate that it is a contravariant index.
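The normalization step and the expansion in an orthonormal basis can be sketched in ℝ² (the vectors here are hypothetical, chosen so the arithmetic is clean):

```python
import math

# Two orthogonal vectors in R^2: 3*(-4) + 4*3 = 0
u, v = (3.0, 4.0), (-4.0, 3.0)

def norm(x):
    return math.sqrt(sum(c * c for c in x))

# Divide by the magnitudes to get orthonormal vectors e1, e2
e1 = tuple(c / norm(u) for c in u)
e2 = tuple(c / norm(v) for c in v)

# Any w in R^2 is the linear combination a1*e1 + a2*e2,
# with coefficients a_i given by the inner products <w, e_i>
w = (1.0, 2.0)
a1 = sum(a * b for a, b in zip(w, e1))
a2 = sum(a * b for a, b in zip(w, e2))
rebuilt = tuple(a1 * p + a2 * q for p, q in zip(e1, e2))
print(rebuilt)  # approximately (1.0, 2.0)
```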

Definition using Cartesian products

A Euclidean vector is a member of a Cartesian product over the reals.[15] Let ℝ^n be the set defined by ℝ^n = ℝ × ℝ × ⋯ × ℝ (n copies of ℝ); a member of such a set may be called an n-dimensional Euclidean vector.

The empty set

The empty set, ∅, is not a vector space: since it contains no elements, it cannot contain an additive identity element. Thus, by the definition of a vector, for an object to be a vector it must exist ("nothingness" is not a vector).

References

  1. ^ Nearing, James. Vector Spaces. University of Miami. http://www.physics.miami.edu/~nearing/mathmethods/vector_spaces.pdf
  2. ^ Beezer, Robert A. A First Course in Linear Algebra. University of Puget Sound. Dec 30, 2015. http://linear.ups.edu/html/section-VS.html
  3. ^ Thrall, Robert M; Leonard, Tornheim (January 15, 2014). Vector Spaces and Matrices. Courier Corporation. p. 10. ISBN 0486321053.
  4. ^ Schaefer, H.H.; Wolff, M.P.; Wolff, Manfred P.H. (June 24, 1999). Topological Vector Spaces. Springer Science & Business Media. p. 9. ISBN 0387987266.
  5. ^ Greenberg, Ralph. Vector Spaces. University of Washington. https://sites.math.washington.edu/~greenber/VectorSpaces.pdf
  6. ^ a b Thrall, Robert M; Leonard, Tornheim (January 15, 2014). Vector Spaces and Matrices. Courier Corporation. p. 24. ISBN 0486321053.
  7. ^ Narici, Lawrence; Beckenstein, Edward (July 26, 2010). Topological Vector Spaces. CRC Press. p. 8. ISBN 1584888679.
  8. ^ Carrell, James B. (September 2, 2017). Groups, Matrices, and Vector Spaces: A Group Theoretic Approach to Linear Algebra. Springer. p. 232. ISBN 038779428X.
  9. ^ Schaefer, H.H.; Wolff, M.P.; Wolff, Manfred P.H. (June 24, 1999). Topological Vector Spaces. Springer Science & Business Media. p. 24. ISBN 0387987266.
  10. ^ Thrall, Robert M; Leonard, Tornheim (January 15, 2014). Vector Spaces and Matrices. Courier Corporation. p. 160. ISBN 0486321053.
  11. ^ Vorobets, Yaroslav. MATH 423 Linear Algebra II Lecture 31: Dual space. Adjoint operator. Texas A&M University. https://www.math.tamu.edu/~yvorobet/MATH423-2012A/Lect3-05web.pdf
  12. ^ Yan, Min. Dual Space. The Hong Kong University of Science and Technology. http://algebra.math.ust.hk/vector_space/12_dual/lecture1.shtml
  13. ^ a b Hall, Frederick Michael (1966). An Introduction to Abstract Algebra. CUP Archive. p. 274.
  14. ^ a b Shilov, Georgi E.; Silverman, Richard A. (December 3, 2012). An Introduction to the Theory of Linear Spaces. Courier Corporation. p. 143. ISBN 0486139433.
  15. ^ Naber, Gregory L. (August 29, 2012). Topological Methods in Euclidean Spaces. Courier Corporation. p. 6. ISBN 0486153444.

Category:mathematics


Misc.

Wikipedia won't let me upload the image... but remember to donate to Wikipedia: https://imgur.com/a/HiofBVY