Hypercupolae


Hi!

I answered you on Talk:Cupola_(geometry)!

See you later! Padex (talk) 17:21, 7 August 2009 (UTC)

Tensor field


Yes, the introduction seems to be an improvement. I'm still not entirely sure I know what it is, but I'm less unclear about how unclear I am now. ;-)- (User) Wolfkeeper (Talk) 03:20, 4 September 2009 (UTC)

Jägermeister


I love the IPA and use it. The issue is its appropriate use in appropriate contexts.

So I must not ignore other editors, but they may ignore me. How does that work?

By the way, the correct idiom is "free rein," not "free reign." Wahrmund (talk) 23:10, 10 November 2009 (UTC)

Merely etymological


"and the distinction of <v> from <f> for medio-final /v/ to become merely etymological."

What do you mean? That the distinction is only made in the spelling and no longer in the language? Also, there was a medial distinction between /f/ and /v/ ('sævar' vs. 'sofa') but there wasn't any final distinction, was there? Haukur (talk) 23:51, 28 November 2009 (UTC)

Yes, that is precisely what I mean. Vafl, for example, would have been pronounced /ˈvɑvl/ after the merger, and fá still as /ˈfɑː/. /f/ only occurs initially, and is otherwise /v/, so it's an initial vs. medio-final allophonic relationship, like that of ð. Sævar would be pronounced /ˈsæː ˌwɑɾ/ before the merger, but afterwards would become like an endingless sæfari, before and after pronounced /ˈsæː ˌvɑɾ i/. LokiClock (talk) 00:48, 29 November 2009 (UTC)
All right, I would phrase that as "the distinction between medial /v/ and medial /f/ disappeared, though the distinction is made in normalized spelling". But note that sæfari is a bit different - it's a compound word so the 'f' is pronounced /f/ in any time period. Haukur (talk) 11:16, 29 November 2009 (UTC)
No, there is no medial /f/, except in compounds. There's medial <f>. The sound represented by <v>, /w/, merged with the sound represented by medial <f>, /v/, so that /v/ was confined to being a medial allophone of /f/. I failed to note that it was a compound word, but you get my point, anyway. LokiClock (talk) 20:03, 29 November 2009 (UTC)
Sure, if you want to slice it that way. Note that if I've commented on your talk page I'll be watching it, you don't need to specifically notify me of replies over on my talk page. Haukur (talk) 20:42, 29 November 2009 (UTC)
Alrighty, then. If you have any suggestions on how I could word that better, please let me know, or just clarify the text yourself. LokiClock (talk) 20:48, 29 November 2009 (UTC)

Hnefatafl


Thanks! Briangotts (Talk) (Contrib) 14:51, 30 November 2009 (UTC)

Your recent edits


Hello. In case you didn't know, when you add content to talk pages and Wikipedia pages that have open discussion, you should sign your posts by typing four tildes ( ~~~~ ) at the end of your comment. You may also click on the signature button located above the edit window. This will automatically insert a signature with your username or IP address and the time you posted the comment. This information is useful because other editors will be able to tell who said what, and when. Thank you. --SineBot (talk) 09:04, 1 December 2009 (UTC)
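(For example, typing ~~~~ at the end of a comment expands on save into a signature plus timestamp along the lines of Example (talk) 09:04, 1 December 2009 (UTC); the username here is hypothetical.)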

First Grammatical Treatise


I didn't remove any references, I merely moved the speculation about Þorodd out of the first paragraph of the article... AnonMoos (talk) 03:56, 27 December 2009 (UTC)

By the way, you were the one who first added the name "Þorodd" to the article (see http://en.wikipedia.org/w/index.php?title=First_Grammatical_Treatise&action=historysubmit&diff=328491616&oldid=327953691 ), so if you don't know where that theory comes from, then maybe it should be removed from the article. AnonMoos (talk) 03:59, 27 December 2009 (UTC)
Well, Einar Haugen conspicuously refrained from endorsing any Thorodd theory, so I strongly doubted that it could be the clear mainstream scholarly consensus unless there was some recent discovery... AnonMoos (talk) 17:29, 27 December 2009 (UTC)
Einar Haugen's book is already listed on the First Grammatical Treatise article page. AnonMoos (talk) 21:41, 27 December 2009 (UTC)
Well, I'm probably going to the University library here fairly soon, so I'll take another look at the Haugen book. However, since you're the one who added the name Thorodd to the article, the general burden for providing documentation on the Thorodd hypothesis would appear to fall on you... AnonMoos (talk) 14:45, 28 December 2009 (UTC)

See Talk:First Grammatical Treatise. AnonMoos (talk) 23:57, 28 December 2009 (UTC)

 
Hello, LokiClock. You have new messages at Sławomir Biały's talk page.
You can remove this notice at any time by removing the {{Talkback}} or {{Tb}} template.

Manual of style


Please refer to WP:MOSMATH. Periods go inside the math tags. Sławomir Biały (talk) 02:07, 31 December 2009 (UTC)
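For example, with an arbitrary formula (my illustration, not one from the guideline): when the formula ends a sentence, write <math>a^2 + b^2 = c^2.</math> with the period inside the tags, rather than <math>a^2 + b^2 = c^2</math> followed by a bare period outside them.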

Reply: Old talk: "Danish tongue" comment


Huh? Sorry, but those comments are old, and I'm really not sure which comment you're referring to. But I cannot see any claim from me that dansk tunga wasn't used about the Danish language as it was spoken then. Icelandic court documents from the 10th century referred to dansk tunga and Snorri also used the term dansk tunga. I'm perfectly aware that the language spoken in the North Germanic areas was called Danish by its speakers (tongue being a synonym for language, as in Icelandic tungumál, Danish tungemål). As far as I can see I'm only discussing the correct use of Danish tongue and whether it can be used about Old Norwegian and Old Icelandic if the term Old Norse only refers to those dialects. Of course you may think of a completely different comment, in which case my reply is pure nonsense ;) Dylansmrjones (talk) 21:45, 2 February 2010 (UTC)

Barnstar

The Original Barnstar
I award LokiClock this barnstar for his enthusiastic contributions to articles on the Old Norse language. Haukur (talk) 20:24, 7 February 2010 (UTC)

Stubbing


Please don't add {{stub}} tags to articles such as Málaháttr when they already have a subject-specific stub tag. And when you're adding stub tags, please put them at the end, as per WP:LAYOUT, not at the start. Thanks. PamD (talk) 22:11, 21 April 2010 (UTC)

Sorry about that, I didn't see. Also, I didn't know that (obviously) about stubs at the end, so thanks. ᛭ LokiClock (talk) 22:49, 21 April 2010 (UTC)

C++0x


This is a courtesy note to let you know that the C++0x#Criticisms section you added is still empty, and that the discussion on the talk page may result in it being deleted on Jun 18 2010, if there is no sourced content in it at that time. --Sacolcor (talk) 15:28, 4 June 2010 (UTC)

Old Norse topics


G'day, I've noticed that you're quite active on Old Norse articles, so I thought you might be interested in this template. Feel free to add, remove, or rearrange things. Hayden120 (talk) 05:31, 17 June 2010 (UTC)

I have marked you as a reviewer


I have added the "reviewers" property to your user account. This property is related to the Pending changes system that is currently being tried. This system loosens page protection by allowing anonymous users to make "pending" changes which don't become "live" until they're "reviewed". However, logged-in users always see the very latest version of each page with no delay. A good explanation of the system is given in this image. The system is only being used for pages that would otherwise be protected from editing.

If there are "pending" (unreviewed) edits for a page, they will be apparent in a page's history screen; you do not have to go looking for them. There is, however, a list of all articles with changes awaiting review at Special:OldReviewedPages. Because there are so few pages in the trial so far, the latter list is almost always empty. The list of all pages in the pending review system is at Special:StablePages.

To use the system, you can simply edit the page as you normally would, but you should also mark the latest revision as "reviewed" if you have looked at it to ensure it isn't problematic. Edits should generally be accepted if you wouldn't undo them in normal editing: they don't have obvious vandalism, personal attacks, etc. If an edit is problematic, you can fix it by editing or undoing it, just like normal. You are permitted to mark your own changes as reviewed.

The "reviewers" property does not obligate you to do any additional work, and if you like you can simply ignore it. The expectation is that many users will have this property, so that they can review pending revisions in the course of normal editing. However, if you explicitly want to decline the "reviewer" property, you may ask any administrator to remove it for you at any time. — Carl (CBM · talk) 12:33, 18 June 2010 (UTC) — Carl (CBM · talk) 13:29, 18 June 2010 (UTC)Reply

Old Norse orthography edit


Why did you revert my addition of <ǿ> to the transcription list? My authority for stating that this is a modern, scholarly way of representing /ø:/ in Old Norse is Terje Spurkland: "Innføring i norrønt språk", Universitetsforlaget, 9th edition (2007). This is the standard textbook in Old Norse at the University of Oslo. From a typographical point of view, I too prefer <œ>, but that doesn't change the fact. Devanatha (talk) 16:31, 6 July 2010 (UTC)

Note that the column in the table was renamed after that edit to "Standard Normalized Spelling". Besides scholarly normalization, you encounter a specific normalized orthography, Standard Normalized Spelling, which is not flexible to the needs of individual texts like an arbitrary normalized spelling, and does not live up to modern standards of Old Norse normalization. I prefer ǿ myself, chiefly for consistency and connection with The First Grammatical Treatise orthography. ᛭ LokiClock (talk) 02:46, 8 July 2010 (UTC)
You see also at Old Norse orthography a "Standard Normalized spelling" column which perhaps should be accompanied by "First Grammatical Treatise orthography" and "General normalized spelling" columns for a fuller account of discrete orthographic norms. ᛭ LokiClock (talk) 02:50, 8 July 2010 (UTC)
Please accept my apology. I realise now that my editing behaviour in this case was highly questionable. I did not intend to start a flame war, and I can only blame this on my lack of Wikipedian experience (though I've had an account for several years, I've never been a prolific contributor). I'll revert my edit. I'd also like to thank you for your outstanding work on the Old Norse article, it's good that Anglophones take an interest in the classical form of the Nordic languages, as we sadly neglect it ourselves. Devanatha (talk) 21:14, 8 July 2010 (UTC)
It's no big deal. Thanks for being swell and reasonable. Also I'd like to say that the tables on the Old Norse page are a bit old and misrepresentative to various degrees. I'm trying to make replacements on Old Norse orthography, but I haven't had time to do more than the vowels. ᛭ LokiClock (talk) 04:22, 9 July 2010 (UTC)

IPA removal from Wylie transliteration


IMO, putting Wylie and IPA together like this tends to confuse the difference between a transliteration system (Wylie) and a transcription (IPA) system. The IPA transcription of the sounds of the isolated Tibetan consonants more properly belongs in the article on the Tibetan script ~ perhaps we should insert the IPA transcriptions there and provide a link? In his original article on the system Turrell V. Wylie did not detail the sounds of the letters - though he did mistakenly call the system he outlined Tibetan transcription. Can we just put the IPA in the article on Tibetan script and then remove it in the Wylie transliteration article? Chris Fynn (talk) 10:11, 29 August 2010 (UTC)

Old Norse orthography


Why did you remove this table from Old Norse orthography? Although the orthography of Old Norse was not completely consistent, the table still gave a general idea of the consonants and the Latin graphemes that were used to represent them. This is not covered anywhere else in the article – only vowels are covered. Thanks, Hayden120 (talk) 06:15, 18 October 2010 (UTC)

I removed it because the "Old Norse alphabet" doesn't exist. But I didn't take into account the consonants, so I'll put it back up until the consonant tables are created. ᛭ LokiClock (talk) 16:02, 18 October 2010 (UTC)

Old Norse tables


I responded on my talk page. Benwing (talk) 10:16, 17 May 2011 (UTC)

set/sit umlaut


Hi, regarding Germanic umlaut#Morphological effects, I still don't understand the umlaut progression from sit to set. In fall to fell I think I understand that fall took a suffix which caused the /ɔ/ (or a different vowel in the past tense?) to front to /ɛ/, after which the suffix disappeared. But in sit → set, since sit is already fronted, what back vowel got fronted? Maybe just a little more detail would clarify it for me. Also, could you put in some more detail about the man → men progression -- was man originally /man/ and not /mæn/, so that /a/ got fronted to /ɛ/? Thanks. Duoduoduo (talk) 23:40, 2 July 2011 (UTC)

I changed the description to "from a past tense form". They were not derived from "sat" or "fell" directly, but use the same ablauts as the original past tenses, the a in *fefall and *sat respectively. I also changed the separate links of causative weak verb to a section link, which gives exact forms for reconstructions and the umlauting suffix. Note that the fe in *fefall, and thus fell, does not come from umlaut, but reduplication (see: wikt:Category:Proto-Germanic class 7 strong verbs). Yes, man was not front before the Great Vowel Shift. ᛭ LokiClock (talk) 00:56, 3 July 2011 (UTC)
Thanks! Duoduoduo (talk) 00:13, 4 July 2011 (UTC)
Sure thing. ᛭ LokiClock (talk) 02:49, 4 July 2011 (UTC)

topological collapse


I see you've recently made changes to collapse (topology). If t is a face of s, t already has two cofaces if t and s are distinct. Should the definition read t is a free face of s if t and s are the only cofaces of t in the complex? Thanks 132.236.54.92 (talk) 19:17, 3 December 2011 (UTC)

t is not a coface of itself, because the faces of an object are 1-lower-dimensional objects. If s is a triangular coface of t, t is an edge. If s is a tetrahedron, t is a triangle. ᛭ LokiClock (talk) 23:15, 3 December 2011 (UTC)
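In symbols, the definition the discussion converges on (using the convention above, where a coface is exactly one dimension higher; a paraphrase of the thread, not a sourced statement) reads:

\[
\tau \text{ is a free face of } \sigma \iff \sigma \text{ is the only coface of } \tau \text{ in the complex.}
\]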

Understanding check on tensor type


I've copied this request here from Talk:Tensor#Understanding check on tensor type, since it does not relate to the article or its editing.

I have some assertions based on my present understanding of tensors, and if they're wrong, could someone explain why?

  • My understanding is that a tensor with all the indices raised is contravariant and with all the indices lowered is covariant, and that the two tensors are dual. Because a space of one-forms is a vector space, the dual to the all-covariant tensor should be the same as looking at the all-covariant tensor as an all-contravariant tensor in the dual space - each covariant index is contravariant relative to the dual space. So, if you represented each contravariant index by a vector, the covariant version would be that same set of vectors in the dual space. And vice versa - the level set representation of each covariant component is the same after taking the dual tensor and observing it in the dual space.
Going on this I imagine raising and lowering indices as looking at pieces of the same object being pushed through a door between the collective vector and dual spaces, and when all the pieces are to one side or the other it looks the same from that side of the door (the two products of all non-dual spaces). ᛭ LokiClock (talk) 22:50, 19 December 2011 (UTC)
  • Since the object itself isn't altered by a change of basis, only its matrix representation, lowering a component and then applying a rotation will show the contravariant component moving against the rotation, and the covariant component moving in advance of the rotation (or is it equal?), but when you raise the index again it will be the same as if the rotation was applied with both indices contravariant. ᛭ LokiClock (talk) 22:50, 19 December 2011 (UTC)

You need to distinguish between finding the dual of a vector, and raising and lowering indices. They are not the same.

  • Given the vector space V, the dual (covector) space V* is defined as the space of linear maps from the vector space V onto the scalars K (or ℝ if you prefer). This definition alone does not allow pinpointing a specific covector for any particular vector: there is no dual vector, only a dual vector space. Given a basis for V, say the set of vectors e_i, we can find a corresponding basis for V* that we call e^i such that e^i(e_j) = δ^i_j, the Kronecker delta. You will notice that if we find a new basis f_i = k e_i, then we have for the duals that f^i = (1/k)e^i. So there is not a linear relationship, but more like an inverse relationship. Raising and lowering indices is a linear relationship.
  • Taking this further, the dual of the dual, (V*)* = V**, is a linear map from V* to the scalars, and thus not technically the same space as V. However, there is a unique (and hence natural) bijective map V** → V that allows us to identify V with V**, thus allowing us to define vectors to map covectors to the same scalar as the reversed operation: e_i(e^j) = e^j(e_i). So we simply define a vector as a co-covector.
  • Very important to understanding this: This is all without any reference to a metric: this all happens without defining the length of a vector.
  • You cannot have more than two of each index in a term, so you need to introduce an index with a different name in your last equation.
  • Once you have this distinction, you can look at introducing a metric (a bilinear form V×VK) and what that allows one to do, for example mapping vectors onto covectors (raising and lowering indices).

— Quondum (talk) 05:51, 20 December 2011 (UTC)
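The inverse relationship in the first bullet can be checked in one line: with the uniform rescaling f_i = k e_i used above,

\[
f^i(f_j) = \tfrac{1}{k}\,e^i(k\,e_j) = \tfrac{k}{k}\,\delta^i_j = \delta^i_j,
\]

so the covectors f^i = (1/k)e^i again satisfy the duality condition with the scaled basis.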

Okay, so you can take that unique map, and take a covector and multiply it by a vector twice. The first application of the vector will cancel the stretching of the components performed by the change of basis, and the second will, in all, turn the covector into a vector. However, because we don't yet know the lengths of the two vectors, the cancellation of the change of basis coefficients is not a property of a specific combination of a covector and vector, but will work for any of them - that they cancel does not identify the vector with the covector.
To identify them based on the scalar they produce by combination, we have to cancel out the effect their lengths have on the Kronecker delta, which in the case where the inner product is the dot product means dividing the result by the two vectors' lengths and only identifying them if they equal 1. And for any metric, the mapping of two vectors to a scalar is equivalent to defining a covector for the scalar, because in every basis the components have to satisfy the inner product (cancel) through regular matrix multiplication. If we have the vector in one basis, we can find the covector, but we don't need to find the covector every time, because the inverse relationship of the transformation law lets us know what form that covector will take if we forget about it while we manipulate the basis of the vector. ᛭ LokiClock (talk) 19:38, 20 December 2011 (UTC)
I'm sorry, I'm having difficulty following that. I'll try to illustrate the process that can be achieved, which is to get a basis and the dual of that basis: a set of vectors and a set of covectors that together satisfy the duality requirement. The point is that in n dimensions you need n linearly independent vectors before you can determine any of the dual basis covectors. Think of the vector space as an abstract mathematical object; all you know is that it is an n-dimensional vector space, for example the polynomials of order n over the interval [0,1]. Once we have identified a basis, in this example it could be e_i = x^i – the superscript here being a power, not an index – we can express any polynomial in the vector space using n coefficients a^i. Now imagine a covector: anything that linearly maps these vectors (the polynomials) onto scalars, perhaps a weighted sum of n specific points in the interval of the polynomial. By solving a linear equation, we can find the weight for each point in each covector ε^i so that it will produce exactly the corresponding coefficient of the polynomial it is acting on: ε^j(a^i e_i) = a^j. We can now express any covector as a linear combination of these weight combinations we call ε^i. The polynomials could have been any other n-dimensional vector space. Am I being clear about the vectors and covectors being abstract vector spaces? The vectors in V are linear combinations of the abstract e_i. The coefficients (or "components" as they are rather unfortunately called) are just that: coefficients. A column of coefficients a^i does not constitute an element of V, but a^i e_i does. Now change any one of your basis vectors, and every one of the dual basis covectors might change.
When you say "multiply it by a vector twice", I imagine you mean for covector c and vector v, (v(c))v = (c(v))v, which is one acting on the other (or the contracted tensor product) followed by scalar multiplication. Apply this with the basis and its dual, and you get (ε^j(e_i))e_j = e_i, which is simply mapping the covector basis back onto the original vector basis, the reverse map of getting the dual basis. — Quondum (talk) 21:16, 20 December 2011 (UTC)
Yes, I am thinking of the vector space as arbitrary and abstract. Let me try more carefully.
The inner product V×V* → K of v^i c_i is a scalar p. See inner product space#Related products, which supports this notion of the inner product as a product of a covector and vector, not immediately one between two vectors or covectors. vp is then a vector. I'm attempting to convert the vector to a covector, change the basis, and convert the changed covector to a vector. My hypothesis is that the vector counterpart to the changed covector is the changed vector. I was assuming that if vp is the vector I expect from combination with the corresponding covector, then I've found my covector. I'm saying my fallacy was that multiple combinations of vector and covector can produce the same scalar. Take for example the dot product over a unit vector and its corresponding covector. Multiplying this by the unit vector will produce the unit again (p = 1), but so it will for any covector corresponding to a vector with a length inverse to the cosine of the angle between it and the unit vector. Now I'm assuming that the failure of an inner product to support vector-covector identification on its own boils down to the lack of definition of length - without an explicit map between vectors and covectors, i.e. a metric, I can't be sure that any vector corresponds to a covector, and thus I can't raise or lower an index.
Now, it seems like the idea that they still correspond after change of basis is flawed by the inverse transformation - if v′_i = v_i k, and v′^i = (1/k)v^i (the covariant change of basis is a scaling of all basis vectors by k), then the covector corresponding to v′^i is (1/k)v_i, not k v_i. Maybe I'm not interpreting the coefficient in the transformation law properly, because I see this as conflicting with the idea that tensors are independent of their choice of basis, the very fact I'm trying to illustrate with my proposed parallel. Or is that why you're pointing out that raising and lowering an index is linear? That it's a correspondence between vector and covector, but not the notion of covector-vector duality I'm looking for? ᛭ LokiClock (talk) 23:12, 20 December 2011 (UTC)
My question boils down to, is raising and lowering an index an active transformation which I've mistaken for a passive one? ᛭ LokiClock (talk) 23:46, 20 December 2011 (UTC)
We need to be careful with notation. Let's indicate a tensor in bold here, and use indices to indicate literal indexing (i.e. we will not use abstract index notation). Without the bold, we are referring to the tensor's coefficients for a specific basis. And for the time being, the only indexed tensors are our basis vectors and covectors, where we mean that there are n actual tensors; this is the only time bold and indexing should be combined here. Denote the inner product with a dot c⋅v or as a function c(v) and the outer (tensor) product with c⊗v. Use juxtaposition only to denote the scalar multiplication. And stick to the Einstein summation convention: repeated indices give an explicit summation of n entries. You will notice that the only time the order of a product in this notation is significant is in the tensor product. So I would put (being a little pedantic for now):
The inner product V × V* → K of v and c is a scalar p: p = v⋅c = (v^i e_i)⋅(c_j ε^j) = v^i c_j (e_i⋅ε^j) = v^i c_j δ_i^j = v^i c_i.
Your description of the process is accurate enough. What (v⋅c)v is doing is what you'd expect from this notation with normal vectors: you are performing a projection (and a scaling), thus losing most of the information about c.
The transformation process and the invariance of v may be more obvious when the notation is used carefully:
v = v^i e_i = (v^i/k)(k e_i) = v′^j e′_j.
There is nothing magical here - it is simply expressing the same thing v in terms of two different bases e_i and e′_j. In the general case the scalar k is replaced by a linear transformation (a matrix, and hence its inverse for transforming the coefficients). There is nothing magical about a dual basis any more than there is for an orthonormal basis: any basis for the dual space would do, regardless of what basis we use for the vector space. But just like orthonormal bases, there is no generality lost and it is convenient. The utility of V* arises from its ability to represent the most general possible linear projection of a vector onto a scalar.
I think the answer to your question is that there is no one-to-one vector–covector duality; the duality is only between the vector spaces as a whole. The raising and lowering of indices is entirely related to a natural mapping V → V* that is induced by the metric. It is so natural once we have the metric, we can treat the two spaces as the same vector space. In this sense the process becomes passive: we are simply choosing which basis we are expressing our tensor in terms of: e_i or ε^j. The spaces V and V* when we have a metric seem to be kept separate for formal reasons, but indices are raised and lowered without thinking in general, usually considering it as the same tensor. — Quondum (talk) 07:01, 21 December 2011 (UTC)
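The k-scaling identity v = v^i e_i = (v^i/k)(k e_i) is easy to verify numerically; a small sketch (the basis, components, and k are arbitrary):

import numpy as np

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # basis vectors e_i
v_comp = np.array([2.0, 5.0])                        # components v^i
k = 4.0

v_old = v_comp[0] * e1 + v_comp[1] * e2              # v = v^i e_i
v_new = (v_comp[0] / k) * (k * e1) + (v_comp[1] / k) * (k * e2)
print(np.allclose(v_old, v_new))                     # True: same vector v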
Okay, beautiful. Thank you very much for walking me through this! I'm thinking, if the sets of bases for each space are like two charts, the metric is like the transition map between them, making them into a manifold where change of covariant basis eventually overlaps with change of contravariant basis. I don't know if they really overlap, but I'm trying to make a logical continuity between change of basis inside each space and change of basis between spaces.
Now, I still have some issues of notational clarity. You're saying that in Einstein notation, choosing an index is choosing a basis? So the vector is by itself v, and v^i and v_i are components of the vector in a chosen co- or contravariant basis? Let me try and build on one of your formulas: We've chosen a metric, so that my c and v are intrinsically the same object, aliases used only for visual pairing with the covector and vector forms.
c = c_i ε^i = (k c_i)(ε^i/k) = c′_j ε′^j.
c_i ε^i = v^i e_i.
c′_j ε′^j = v′^j e′_j.
How do I isolate v′^j and c′_j? I may be able to show the components' correspondence but to demonstrate this graphically I'd have to choose the basis (repeatedly). ᛭ LokiClock (talk) 01:41, 22 December 2011 (UTC)

The easy questions first: Isolating the coefficients of a tensor with respect to a specific basis can be done via the dot product with each dual basis vector (this can be done for any basis, for a tensor of any order, with any mix of covariant and contravariant factor spaces). Anyhow, it should be clear that you get different scalar coefficients depending on whether you are dotting with the unprimed or primed basis:

c⋅e_i = (c_j ε^j)⋅e_i = c_j (ε^j⋅e_i) = c_j δ^j_i = c_i.

This feels a little odd, each basis vector having its specific counterpart covector in the dual basis, yet there is no such thing as duality of standalone vectors in this sense. Also not to be confused with a myriad independent senses in which the term "dual" is used.

Yes, in Einstein notation the scalars with indices are the coefficients (or as everyone but me calls them, components) for a specific basis. When you are dealing with numerical values, you have to make an actual choice of basis, but when you are dealing symbolically the actual choice can remain unspecified. This is complicated by the use of abstract index notation and the fact that usually there is no need to be clear whether this is being used, and you can even mix them freely if desired. They even look identical, except for a subtle indicator such as the use of Latin vs. Greek indices. The down side is potential confusion if you do not know which is intended, and there are times when you must be specific. So you could consistently write v = v^b = v^β e_β, where the Latin b (the abstract index notation) may be interpreted as an implied e_b multiplied (via tensor product) with the term, and duplication of an abstract index implies contraction, not summation. Because the notation is so well-behaved, one generally does not have to worry whether an equation is as per Einstein (and hence dealing with scalars), or abstract (and hence representing the tensors themselves). I would not be surprised if many physicists are not clear on the distinction between the two notations.

I'll try to sketch typical transformations of the basis in a metric space. Imagine you have a vector basis and its covector dual. Being in a metric space allows us to refer to rotation, angles and length. Also, we can picture both the bases in the same space. Any collective rotation of the basis is matched by the same rotation, in the same direction, of the dual basis. Any collective scaling of the basis is matched by an inverse scaling of the dual basis. Any distortion of the basis (e.g. a scaling on one axis) is matched by the inverse distortion (an inverse scaling on the same axis), so if the angle between two basis vectors is reduced, the angle between duals increases. As to overlap between the basis and its dual, this occurs with an orthonormal basis and its dual in a Euclidean space. It never occurs for an indefinite or a negative definite metric. In these cases, you can still get the dual being identical to the basis except for the appropriate number of its vectors being of the opposite sign. It is usual (at least in physics) to broaden the term orthonormal to allow these cases, where the basis vectors are orthogonal but the square of these vectors may be in the set {+1, 0, −1}, thus allowing an "orthonormal" basis for any metric. — Quondum (talk) 07:31, 22 December 2011 (UTC)
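The rotation and scaling behaviour described above can also be verified numerically. In this sketch the basis vectors are the columns of B and the dual covectors are the rows of its inverse; the particular basis and angle are arbitrary:

import numpy as np

B = np.array([[1.0, 1.0],
              [0.0, 2.0]])        # basis vectors as columns
D = np.linalg.inv(B)              # dual covectors as rows: D @ B = I

t = 0.3                           # arbitrary rotation angle
R = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])

# rotating the basis rotates the dual basis by the same R...
print(np.allclose(np.linalg.inv(R @ B), D @ R.T))    # True
# ...while scaling the basis by k scales the dual basis by 1/k
print(np.allclose(np.linalg.inv(2.0 * B), D / 2.0))  # True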

Okay, I just found out about the swapping of upper and lower index between abstract index notation and Einstein notation, so the basis decomposition formulae are more consistent with my understanding of linear combination.
If there's no sense of a vector in a basis corresponding to a covector in its dual basis, how can we say that changing to the dual basis is a passive transformation? I can understand if the co/vector is essentially untyped in a metric space, and only has covariant or contravariant components depending on the basis, but once used as coefficients for their respective basis, the objects should be the same again. Plus, I'm not sure if this fits with a tensor space as a product space, unless the product with the dual space is itself once they have a metric, not just that the space and its dual are the same.
The basis overlap comment wasn't about an overlap between the basis and its dual basis, but an overlap in the set of bases for the spaces once the spaces are unified, so that change from covariant to contravariant basis is the same as some strictly covariant change of basis. ᛭ LokiClock (talk) 23:54, 22 December 2011 (UTC)
Don't misunderstand me; I was trying to make the distinction between dual vectors in a basis and no dual for an arbitrary standalone vector that is not part of a set of n vectors that constitute a basis. For a single vector you have an (n−1)-dimensional subspace of the covector space that can be the covector to a given vector; which specific covector it turns out to be is determined by the n−1 other vectors you choose to build the basis from. The component decomposition in terms of a basis is the same as linear decomposition of any linear system: you have a set of synthesis functions and a corresponding basis of analysis functions. Just like any transform: the analysis functions are the covector basis and the synthesis functions are the vectors. Anyhow, this seems to be rather a digression.
I'm not too sure what you mean by active and passive transformations. As far as I'm concerned, changes between bases is not doing anything to the object you are transforming the components of: you are still representing the same object; what basis you choose to do so in terms of with a set of coefficients is immaterial. In a metric space, whether this is called covariant or contravariant becomes moot; the terms now only have significance with respect to what you label as your "vector" basis. You could just as easily use the vectors of your covector basis as your vectors with no effect. The only importance of having two bases of this nature then is to express the metric of the space.
I'm a little lost by your comment about a product space; the tensor space of a given order is not closed under any product, if that is what you're talking about. The order of the tensor product is the sum of the orders; contraction reduces the order by 2. So the two products of vectors are the scalar and order-2 tensors. The contraction requires the metric. — Quondum (talk) 05:04, 25 December 2011 (UTC)
Right, like when they change basis and none of the vectors change relative angle or distance, the dual basis sees no inverse effect. Taking the dual transmits contextual information, so it wouldn't be well-defined for taking the dual of any single object. So if I have two vectors, is there an n-2 dimensional subspace that can be the dual of the set of vectors, or maybe it reduces each vector's corresponding covector subspace separately? Do we know the shape the subspace will have for some set of vectors?
If you have enough context to take the dual of the basis, why couldn't you piggyback the vector or covector onto the basis when you take the dual? If the operation's fully determined for a basis, it should be fully determined for a vector in a known basis.
I don't see what the metric is doing that allows you to still use the covariance and contravariance that specifies which space contributed an object, while treating the spaces as indistinct by making the phenomenon of contravariance and covariance shallow.
I followed for treating V×V* as V×V after the metric, but not that the product of those two spaces is treated as if no complexity was added by the product. You allow a higher order of object, (1,1) tensors, but the (1,0) & (0,1) objects aren't orthogonal sets. They're redundant images of the same set. ᛭ LokiClock (talk) 08:14, 26 December 2011 (UTC)
Each vector in the vector basis provides a one-D constraint on each of the dual vectors, producing a "flat" hyperslice of covectors. The mapping must be either to 1 or to 0. So the first vector constrains each covector to an (n−1)-D space. The covectors are simply at the intersection of all the constraints for that covector. This means that the dual for an additional vector does not really make sense – what constraints would apply? The only sensible interpretation would be as though it were yet another basis vector, and the result is overconstrained: it has no solution.
The covariance and contravariance, when you have a metric, is still mathematically convenient, even though it is no longer necessary. It saves having to remember when to negate the square of a component. This convenience applies in Minkowski space and particularly in curved manifolds, where it is impossible to choose coordinates so that the metric takes a simple form globally. — Quondum (talk) 10:50, 26 December 2011 (UTC)
Where can I learn to take a dual basis? Useful is one thing, but I don't see how it still exists. The covariant and contravariant objects are the same, yet they can still be distinguished by that property. ᛭ LokiClock (talk) 12:06, 26 December 2011 (UTC)
It is not the object that is covariant or contravariant; it is the basis and the coefficients. Although, without a metric, there seems to be a definite difference in what you can do with objects from the vector or covector space. To learn about Covariance and contravariance of vectors, start with a normal 2-D vector space with a Euclidean metric. Start with the normal orthonormal x–y basis, and express some fixed vector in terms of that basis. Then choose another basis that varies in the obvious ways: increase the length of the basis vectors, and find the coefficients required to express the fixed vector. Find the dual basis: those (co)vectors that when dotted with the basis in every combination produce the identity matrix. Use it to find the coefficients of the fixed vector. Change the angle of one of the basis vectors. See how the covector basis changes, and how the fixed vector's coefficients change for the new basis. If you're feeling energetic, play with three dimensions, though you'll get the idea from 2-D. Then play with a space with a Minkowski metric: Δs^2 = Δx^2 − Δy^2. Here there is no orthonormal basis, only orthogonal bases in which one basis vector squares to +1 and the other to −1. Reading about the respective aspects will help. — Quondum (talk) 15:58, 27 December 2011 (UTC)
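The closing Minkowski exercise can be tried numerically as well; a minimal sketch of lowering an index with the metric Δs^2 = Δx^2 − Δy^2 (the sample components are arbitrary):

import numpy as np

eta = np.diag([1.0, -1.0])   # metric for ds^2 = dx^2 - dy^2
v = np.array([3.0, 2.0])     # contravariant components v^i
v_low = eta @ v              # lowered components v_i = eta_ij v^j -> [3., -2.]
s2 = v @ eta @ v             # invariant interval: 3**2 - 2**2 = 5
print(v_low, s2)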

Undo request


If you want to undo the edit you can click "View history", then click on "00:22, 5 December 2011", then click on "Edit" and then on "Save page". (Unless I have misunderstood what you wanted or the wiki software has been changed in some way I am not aware of.) - Haukur — Preceding unsigned comment added by 157.157.183.55 (talk) 12:49, 24 January 2012 (UTC)

Ah, thank you. Sometimes I see that warning and I forget it's possible. ᛭ LokiClock (talk) 12:54, 24 January 2012 (UTC)

On Old Norse


I'll respond on my talk page.--91.148.159.4 (talk) 13:55, 24 January 2012 (UTC)

I have now responded, but I will copy the discussion to the article talk page, which is a more appropriate place for it.--91.148.159.4 (talk) 14:23, 24 January 2012 (UTC)

Your enhancement to isomorphism


Hey, I really like your addition, distinguishing between "we share all properties" and "our structures share all of their properties." That's a very nice way to explain the difference between equality and isomorphism.—PaulTanenbaum (talk) 15:17, 17 February 2012 (UTC)

Thank you very much! I think it's what makes isomorphism deep, so I'm happy to hear that. ᛭ LokiClock (talk) 15:35, 17 February 2012 (UTC)

By the way, I don't suppose it's surprising that a person who (a) describes himself as liking to learn and think about languages should also (b) be intrigued by isomorphism. At least that's the way it appears to this correspondent who loves learning and thinking about languages and is intrigued by isomorphism.—PaulTanenbaum (talk) 18:42, 17 February 2012 (UTC)

"All automorphisms of an Abelian group commute" --LokiClock


[1] False. Let A = T ⊕ T where T is an arbitrary abelian group with at least one non-zero element of order ≠ 2. Let φ(x, y) = (y, x) and ψ(x, y) = (−x, y). Then φ∘ψ(x, y) = (y, −x), but ψ∘φ(x, y) = (−y, x). Ironically, the two products with different order have exactly opposite signs (i.e. the anticommutator of these is zero, not the commutator). Incnis Mrsi (talk) 16:46, 7 March 2012 (UTC)
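The counterexample is easy to confirm concretely; a quick sketch taking T = Z/3 (whose generator has order 3 ≠ 2) and checking every element of A = T ⊕ T:

from itertools import product

phi = lambda p: (p[1], p[0])          # phi: swap the two summands
psi = lambda p: ((-p[0]) % 3, p[1])   # psi: negate the first coordinate

witnesses = [p for p in product(range(3), repeat=2)
             if phi(psi(p)) != psi(phi(p))]
print(witnesses)  # non-empty, so the two automorphisms do not commute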

Thank you. There was a hastily read definition behind that. ᛭ LokiClock (talk) 00:11, 8 March 2012 (UTC)

Bach, Cotton, Lanczos, Schouten tensors


Hi LokiClock. I noticed you are one of few active editors to have meaningfully edited the Bach tensor article. I've recently made some major changes to the Lanczos tensor article. Perhaps you'd be interested in improving it or know someone else who is. Similar pages that could use some attention are Cotton tensor and Schouten tensor. Teply (talk) 23:41, 13 October 2012 (UTC)

I will probably not be able to contribute significantly to those articles. I was just reading random JSTOR previews and noticed the statement I added. ᛭ LokiClock (talk) 01:34, 20 October 2012 (UTC)

talkback

Hello, LokiClock. You have new messages at The Great Redirector's talk page.
You can remove this notice at any time by removing the {{Talkback}} or {{Tb}} template.

Syntactic monoid (clarification)


Hi! Thanks for your edit of Syntactic monoid#Syntactic equivalence. However, you changed the symbol of "left syntactic relation" (2nd def.), while the clarification request complained about "right syntactic equivalence" (1st def.) and "syntactic congruence" (3rd def.) looking the same. Therefore, in the new version, they still look the same. Maybe you intended to change the 1st or 3rd def.'s symbol? - Jochen Burghardt (talk) 15:46, 8 February 2014 (UTC)

I absolutely did, thank you. ᛭ LokiClock (talk) 02:55, 9 February 2014 (UTC)

ArbCom elections are now open!


Hi,
You appear to be eligible to vote in the current Arbitration Committee election. The Arbitration Committee is the panel of editors responsible for conducting the Wikipedia arbitration process. It has the authority to enact binding solutions for disputes between editors, primarily related to serious behavioural issues that the community has been unable to resolve. This includes the ability to impose site bans, topic bans, editing restrictions, and other measures needed to maintain our editing environment. The arbitration policy describes the Committee's roles and responsibilities in greater detail. If you wish to participate, you are welcome to review the candidates' statements and submit your choices on the voting page. For the Election committee, MediaWiki message delivery (talk) 16:33, 23 November 2015 (UTC)

Old Norse IPA help needed


Hi LokiClock. A long time ago I asked for help in rendering a few Old Norse names into IPA and you kindly gave your thoughts (here and here). The thing is that you and User:Nora lives gave differing pronunciations, and I'd rather not mix and match the pronunciations you guys offered. Since Nora hasn't been around for a couple years, I was hoping you could show me in this table how you'd render these names and patronymics in IPA. That would give me a simple and consistent list to work with for articles relating to the 11th-century to 13th-century Kings of the Isles who regurgitated some of these. The last two are Old Norse forms of Gaelic names (the latter appears in Ágrip af Nóregskonungasǫgum [2]).

Name | IPA
Lǫgmaðr |
Óláfr Óláfsson |
Haraldr Haraldsson |
Guðrøðr Guðrøðarson |
Rǫgnvaldr Rǫgnvaldsson |
Magnús Magnússon |
Ragnhildr Óláfsdóttir |
Óspakr-Hákon |
Affrica Guðrøðardóttir |
Bjaðǫk |
Bjaðmunjo Mýrjartaksdóttir |

--Brianann MacAmhlaidh (talk) 00:31, 17 September 2016 (UTC)

ArbCom Elections 2016: Voting now open!


Hello, LokiClock. Voting in the 2016 Arbitration Committee elections is open from Monday, 00:00, 21 November through Sunday, 23:59, 4 December to all unblocked users who have registered an account before Wednesday, 00:00, 28 October 2016 and have made at least 150 mainspace edits before Sunday, 00:00, 1 November 2016.

The Arbitration Committee is the panel of editors responsible for conducting the Wikipedia arbitration process. It has the authority to impose binding solutions to disputes between editors, primarily for serious conduct disputes the community has been unable to resolve. This includes the authority to impose site bans, topic bans, editing restrictions, and other measures needed to maintain our editing environment. The arbitration policy describes the Committee's roles and responsibilities in greater detail.

If you wish to participate in the 2016 election, please review the candidates' statements and submit your choices on the voting page. MediaWiki message delivery (talk) 22:08, 21 November 2016 (UTC)

You have been pruned from a list


Hi LokiClock! You're receiving this notification because you were previously listed at Wikipedia:WikiProject History/Outreach/Participants, but you haven't made any edits to the English Wikipedia in over 6 months.

Because of your inactivity, you have been removed from the list. If you would like to resubscribe, you can do so at any time by visiting Wikipedia:WikiProject History/Outreach/Participants.

Thank you! Message delivered to you with love by Yapperbot :) | Is this wrong? Contact my bot operator. | Sent at 18:00, 1 July 2024 (UTC)