User talk:Ozob/Archive 1


Thank you...

...for your improvements to the problem of Apollonius! :)

Welcome to Wikipedia, by the way! I hope that you like it here, and if I can help you, I'll do my best. I like your name; perhaps it's Bozo spelled backwards? ;) A friendly hello from Willow (talk) 23:22, 14 May 2008 (UTC)


....for helping me understand partitions and Riemann integrals... —Preceding unsigned comment added by 69.116.88.176 (talk) 23:40, 14 July 2009 (UTC)

Empty product

I appreciate the compromise at empty product. The difficulty is that the material about 0^0 isn't particularly relevant to the article on empty products, except for the case where 0^0 is viewed as an empty product. That's why all the material you added is already in the exponentiation article, with a pointer from the empty product article. I hope you'll look through the talk archives of those pages to see that Bo Jacoby has made the same arguments before, but didn't find consensus. The previous round of discussion about the topic is what led to the consolidation in the exponentiation article. — Carl (CBM · talk) 22:06, 12 June 2008 (UTC)

Hi Carl,
Yes, I agree that the material's not relevant. I did read part of the talk page, and I agree that there's no consensus. And I, too, would prefer that the material be on the exponentiation page. But I saw a war starting, and I was hoping I could pacify both sides. I actually like the article better the way you've left it (because, as you say, the article is about empty products and not about 0^0); but I thought that by including the other side's arguments, I would make everyone happy. I just want the discussion to be peaceful. Ozob (talk) 17:49, 13 June 2008 (UTC)

If you feel a discussion is more appropriate somewhere else, the usual thing to do is to direct people there and copy material over. People leave messages to bring matters to community attention so at least Trovatore's request should not have been removed. You can't expect people to watch this page religiously. I almost missed seeing the request because of your removal. In general, you should not remove other people's comments unless they violate some serious policies. Even then, it may be preferable to archive them (see Wikipedia:Talk_page_guidelines). --C S (talk) 23:54, 12 June 2008 (UTC)

Hi C S,
I'm puzzled. Just as you suggested, I left a notice [1] redirecting people to Talk:Empty_product#Moved from WikiProject talk where I had moved the discussion [2]. Trovatore's original comment didn't provide any context except for a mention of Bo Jacoby (I had to look at the article history to know what was going on), so I thought my notice was no worse. So I'm wondering how I might have done better. I see that you would have preferred if I had left Trovatore's original comment, and now that I think about it, that would have been the right thing to do. Do you have any other ideas on better ways to move a discussion? Thanks. Ozob (talk) 17:49, 13 June 2008 (UTC)
Well, as you've figured out, without knowing the context, it's more prudent just to leave things the way they were said. Trovatore could have phrased it better but this is like the umpteenth time, so he probably was too exasperated to do so. To learn about Bo Jacoby you have to go through a lot more than one article history (you can try searching the WP:Math archives). Except for the removal of the original request, moving the other comments wasn't a bad idea. But it's unusual. It's not uncommon for the discussion to sometimes become a little article specific before someone advocates moving the focus of discussion elsewhere. Even then, the comments themselves are not expunged. I personally think it's just best to leave things as they are. The bot will archive them anyway. Also, sometimes people want to make a comment to the general community and not get involved on the article discussion page (which they may not even watch). --C S (talk) 21:01, 13 June 2008 (UTC)
Oh, I've seen Bo Jacoby at work before, and I realize that he's part of the problem. (But I also see User:Michael Hardy, User:JRSpriggs, and User:Kusma on his side.) Counting from the first reply (which was also the first to mention the issue at hand rather than plead for help) at 15:07 UTC to 21:02 UTC when I moved it, it generated ten posts totaling about 7,000 bytes, all of which simply repeated arguments that could be found on the talk page. My real concern was that the thread was pure noise, no signal, and a lot of noise at that (for that page, at least). I felt entirely justified in moving it away, "to a better place", as I put in my edit summary, and I meant that not only in a literal but also in a more figurative way. I know that it's unusual, but I strongly feel that it was (and still is) the right thing to do. Ozob (talk) 00:28, 14 June 2008 (UTC)

Apollonius

Hi Ozob, I’ve replied to your points at: Talk:Problem_of_Apollonius#Oldest_significant_result_in_enumerative_geometry.3F

Sorry for the delay!

Nbarth (email) (talk) 11:49, 22 June 2008 (UTC)

With thanks

 
Mmmm, gratitude...

I'd like to thank you for your contributions to Emmy Noether. As a total mathematics moron, I feel infinitely indebted to number-smart folks like you and WillowW. I appreciate your support and the many edits you've made. Have a donut. – Scartol • Tok 00:03, 23 June 2008 (UTC)

Emmy Noether

I saw that you have almost entirely reverted an IP's edit, which I'm not that happy about. The person obviously knew what s/he was talking about. Missing citations is bad, but one cannot make everything perfect in one go. Please reconsider your revert. Thank you, Randomblue (talk) 19:13, 30 June 2008 (UTC).

I have discussed this privately with the user over email, and I hope we are both satisfied with the situation. I'm actually looking forward to their contributions, since (as you said) they seem to know what they're talking about. If there's something you'd like to rescue from the edit in question, please go ahead and change the article. I'm not infallible, after all, and as I said initially, I'm probably being a little overprotective. Ozob (talk) 22:31, 30 June 2008 (UTC)

The footnote to Einstein's letter to the New York Times links to a Web page that claims Einstein's letter appeared on May 5, 1935. However, one can retrieve the actual letter from the New York Times archive, and see that the letter appeared in the Times on May 4, 1935. Therefore, the Web page cited in the footnote has the date wrong. I corrected the date recently, but just discovered that my correction was reverted on the grounds that my change was incorrect. I invite you to check for yourself that the letter appeared on May 4, 1935. —Preceding unsigned comment added by 66.108.13.221 (talk) 00:23, 4 July 2009 (UTC)

expert point of view

Hi Ozob. I just wanted to point out that Moon Duchin, one of the sources in the Noether article, has kindly agreed to review the article. I see she has left a message on the talk page today discussing various issues. There is still room for improvement! Best, Randomblue (talk) 10:22, 9 July 2008 (UTC).

RFC at St. Petersburg paradox

As you have contributed to an earlier related discussion at Wikipedia talk:Manual of Style (mathematics)#Punctuation of block-displayed formulae, you may be interested in Talk:St. Petersburg paradox#Request for comments: punctuation after displayed formula.  --Lambiam 18:18, 8 August 2008 (UTC)

Thanks!

Yes, \textstyle{} is much better. Thanks for fixing it up! siℓℓy rabbit (talk) 15:18, 9 August 2008 (UTC)

Common interests, clearly

You're right, we've been chasing the same articles all over the place for the past weeks. I just ended up reverting your IPA guide to étale (see the talk page). I accidentally got more involved in fixing articles related to algebraic geometry (just back from vacation, so I really should not have time for this!), so it's very good to have some company doing this. I think we've made good progress on many topics. A lot remains to be done on AG-related topics — one example is just adding the maths rating template to many (a majority of) articles on important topics. With things getting busier at work by the day, I'm afraid my contributions will slow down from now on, but let's see. Cheers, Stca74 (talk) 09:13, 10 August 2008 (UTC)


Ozob's proposed deletion of "Non-Newtonian calculus"

The article "Non-Newtonian calculus" provides a brief description about a subject of interest to scientists, engineers, and mathematicians. The omission of this subject from Wikipedia would be a huge disservice to those people. The article is coherent, meaningful, and unbiased. Exactly what do you object to? Shouldn't an encyclopedia contain as much pertinent knowledge as possible? Please reconsider your decision. Thank you.

Sincerely, Michael Grossman —Preceding unsigned comment added by Smithpith (talkcontribs) 19:48, 12 September 2008 (UTC)

Citation, reviews, and comments re "Non-Newtonian Calculus"

"Non-Newtonian Calculus" is cited by Professor Ivor Grattan-Guinness in his book "The Rainbow of Mathematics: A History of the Mathematical Sciences" (ISBN 0393320308). Please see pages 332 and 774.

"Non-Newtonian Calculus" has received many favorable reviews and comments:

The [books] on non-Newtonian calculus ... appear to be very useful and innovative.

                       Professor Kenneth J. Arrow, Nobel-Laureate 
                       Stanford University, USA 

Your ideas [in Non-Newtonian Calculus] seem quite ingenious.

                       Professor Dirk J. Struik 
                       Massachusetts Institute of Technology, USA 

There is enough here [in Non-Newtonian Calculus] to indicate that non-Newtonian calculi ... have considerable potential as alternative approaches to traditional problems. This very original piece of mathematics will surely expose a number of missed opportunities in the history of the subject.

                       Professor Ivor Grattan-Guinness 
                       Middlesex University, England 

The possibilities opened up by the [non-Newtonian] calculi seem to be immense.

                       Professor H. Gollmann
                       Graz, Austria 

This [Non-Newtonian Calculus] is an exciting little book. ... The greatest value of these non-Newtonian calculi may prove to be their ability to yield simpler physical laws than the Newtonian calculus. Throughout, this book exhibits a clarity of vision characteristic of important mathematical creations. ... The authors have written this book for engineers and scientists, as well as for mathematicians. ... The writing is clear, concise, and very readable. No more than a working knowledge of [classical] calculus is assumed.

                       Professor David Pearce MacAdam
                       Cape Cod Community College, USA

... It seems plausible that people who need to study functions from this point of view might well be able to formulate problems more clearly by using [bigeometric] calculus instead of [classical] calculus.

                       Professor Ralph P. Boas, Jr.
                       Northwestern University, USA


We think that multiplicative calculus can especially be useful as a mathematical tool for economics and finance ... .

                       Professor Agamirza E. Bashirov
                       Eastern Mediterranean University, Cyprus/
                       Professor Emine Misirli Kurpinar
                       Ege University, Turkey/
                       Professor Ali Ozyapici
                       Ege University, Turkey


Non-Newtonian Calculus, by Michael Grossman and Robert Katz is a fascinating and (potentially) extremely important piece of mathematical theory. That a whole family of differential and integral calculi, parallel to but nonlinear with respect to ordinary Newtonian (or Leibnizian) calculus, should have remained undiscovered (or uninvented) for so long is astonishing -- but true. Every mathematician and worker with mathematics owes it to himself to look into the discoveries of Grossman and Katz.

                       Professor James R. Meginniss
                       Claremont Graduate School and Harvey Mudd College, USA

Note 3. The comments by Professors Grattan-Guinness, Gollmann, and MacAdam are excerpts from their reviews of the book Non-Newtonian Calculus in Middlesex Math Notes, Internationale Mathematische Nachrichten, and Journal of the Optical Society of America, respectively. The comment by Professor Boas is an excerpt from his review of the book Bigeometric Calculus: A System with a Scale-Free Derivative in Mathematical Reviews.

Thank you.

Sincerely, Michael Grossman —Preceding unsigned comment added by Smithpith (talkcontribs) 01:03, 13 September 2008 (UTC)

Dab page

The dab page is at Janko; why did you redirect the other dab page (Janko_group_(disambiguation)) to the article Janko group instead of the correct dab page? SandyGeorgia (Talk) 00:02, 14 September 2008 (UTC)

"Janko group" on its own is ambiguous. Shouldn't Janko group (disambiguation) disambiguate Janko group rather than Janko?
I agree that the present Janko group article is not a proper disambiguation page, but I'll fix that. Ozob (talk) 00:08, 14 September 2008 (UTC)
Janko group is now a "proper" article, as it should be. Which page is going to be the dab page? The other (duplicate) dab page should redirect to it. SandyGeorgia (Talk) 00:10, 14 September 2008 (UTC)
Janko group is incapable of containing actual content since it could refer to any of four separate objects. I've made it a disambiguation page. Janko group (disambiguation) points there. It's worth pointing out that I created Janko group (disambiguation) because Template:Group navbox pointed to Janko group but clearly intended to refer to all four groups. Ozob (talk) 00:18, 14 September 2008 (UTC)
This was an article. OK, you'll need to sort this with the folks at the Group Math FAC, as the article will now show as incorrectly linked. I did what I could to try to help. SandyGeorgia (Talk) 00:21, 14 September 2008 (UTC)
I have this feeling that we're both trying to do the right thing, and somehow we're not communicating.
I'm not sure what part of the MoS Janko group now violates; it does not give extended definitions as you stated in your edit summary, but only the simplest possible fact which could be used to distinguish the groups (besides their names), namely their order. I'll ask at the Group FAC and we'll sort it out there. Ozob (talk) 00:34, 14 September 2008 (UTC)
I'll wait for you all to sort it; the article was fine and did the job, now it's back to being a dab trying to be an article, and we have Group (mathematics) pointing to a dab, that easily could have been (was) an article. SandyGeorgia (Talk) 00:36, 14 September 2008 (UTC)

Janko

Hi Ozob,

before we are all going mad!!! (<- notice the complete anti-MOS-hly markup), I have replaced the Janko group link in the navbox with all four groups (which is, I believe, worse than having the link to J.gr. itself, but at least compliant with MOS). (I'm so tired of these nitpicking comments at FAC whose sole objective seems to be to follow the guidelines mm per mm) Jakob.scholbach (talk) 10:06, 14 September 2008 (UTC)

Easy as pi?: Making mathematics articles more accessible to a general readership

The discussion, to which you contributed, has been archived, with very much additional commentary,
at Wikipedia:Village pump (proposals)/Archive 35#Easy as pi? (subsectioned and sub-subsectioned).
A related discussion is at
(Temporary link) Talk:Mathematics#Making mathematics articles more accessible to a general readership and
(Permanent link) Talk:Mathematics (Section "Making mathematics articles more accessible to a general readership"). Another related discussion is at
(Temporary link) Wikipedia talk:WikiProject Mathematics#Making mathematics articles more accessible to a general readership and
(Permanent link) Wikipedia talk:WikiProject Mathematics (Section "Making mathematics articles more accessible to a general readership").
-- Wavelength (talk) 01:38, 29 September 2008 (UTC)

Derivative with respect to a vector

Hi Ozob. On the talk page for Euclidean vector, you wrote that there is a section in the article called:

  • Derivatives with respect to a vector (wrongly labeled the "derivative of a vector")

I don't want to sidetrack the discussion on that page, so I'm asking you here. Why do you believe it is incorrectly titled? Isn't

dv/dt

the derivative of the vector v with respect to the scalar t? MarcusMaximus (talk) 06:15, 5 October 2008 (UTC)

It should be "Derivative of a vector-valued function" or "Derivative with respect to a vector". I believe that the section is incorrectly titled because vectors are not functions, and therefore the idea of differentiation is meaningless. Ozob (talk) 01:23, 6 October 2008 (UTC)

What about the concepts of relative velocity as the derivative of displacement, and acceleration as the derivative of velocity? Is the displacement between two points not a vector, but rather a vector function? Is it worth making such a semantic distinction? The text says that the vector is a function of a scalar.

Also, I think it would be incorrect to title it "derivative with respect to a vector", because that is the phrasing typically used to refer to the quantity that appears in the "denominator" part when using the d/dt notation. Certainly there is nothing in there about taking the derivative of something with respect to a vector, in the sense of measuring the rate of change of a dependent function as an independent vector changes. MarcusMaximus (talk) 02:32, 6 October 2008 (UTC)

Displacement between two points is a vector. If one (or both) of those points is variable, then you get a function: Each choice of points determines a unique displacement vector. Functions have derivatives, so this makes sense. But to talk about the derivative of a vector—just a vector, not a vector-valued function—is meaningless. Derivatives are only defined for functions; even when we differentiate a constant (as in (d/dx)(1) = 0) we are really differentiating the constant function.
I think "derivative with respect to a vector" is an appropriate description of a directional derivative. It seems to me that it would be consistent with the usual notation to write the directional derivative in the direction v as d/dv, even though that's not usually done.
I have a question for you: You write d/dt with an upright d where I would write it with an italic d. Surely you learned this notation somewhere. Where? The only place I've ever seen an upright d is on Wikipedia. Ozob (talk) 14:53, 6 October 2008 (UTC)

I understand your point about vector functions. In fact, the reference I was using refers to them as vector functions, so I agree that we should change the title to Derivative of a vector function. I've made the change.

The section we are talking about doesn't discuss the directional derivative, only the derivative of a vector with respect to a scalar. Regardless, I don't believe that d/dv would be the correct notation for the directional derivative. The directional derivative in the direction of v is defined using the gradient function as

∇_v f(x) = ∇f(x) · v

In contrast, the notation you are suggesting indicates that there is some vector-valued function that is dependent on v and we want to differentiate it with respect to v; in other words, find the rate of change of the function as the vector v changes. I don't think that's what the directional derivative does; often v is a constant vector. It doesn't make sense to take the derivative with respect to a constant.
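The gradient-based definition of the directional derivative being discussed can be sanity-checked numerically. The sketch below (the function f and the vectors x, v are made up for illustration, not taken from the discussion) compares ∇f(x) · v against a small-h difference quotient:

```python
# Numerical sanity check: the directional derivative defined via the
# gradient, D_v f(x) = grad f(x) . v, agrees with the difference
# quotient (f(x + h*v) - f(x)) / h for small h.

def f(x):
    x1, x2, x3 = x
    return x1**2 + 3*x2*x3

def grad_f(x):
    # Hand-computed gradient of f.
    x1, x2, x3 = x
    return [2*x1, 3*x3, 3*x2]

def directional_derivative(x, v):
    # grad f(x) . v
    return sum(g*vi for g, vi in zip(grad_f(x), v))

def difference_quotient(x, v, h=1e-6):
    # (f(x + h*v) - f(x)) / h, the limit definition with a small finite h.
    xh = [xi + h*vi for xi, vi in zip(x, v)]
    return (f(xh) - f(x)) / h

x = [1.0, 2.0, -1.0]
v = [0.5, -0.5, 1.0]
exact = directional_derivative(x, v)   # grad f(x) = [2, -3, 6], so this is 8.5
approx = difference_quotient(x, v)
print(exact, approx)  # the two values agree to about six digits
```

The same check works for any differentiable f; only the hand-coded gradient would change.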

You could argue that the gradient of f at x is equal to df/dx,

∇f(x) = df/dx

but I haven't given it enough thought to decide if that idea is problematic.

I actually agree with you on the d/dt notation, but several places on Wikipedia people have gone through and changed my italic d's to upright, so I assumed that was some sort of style guide convention. MarcusMaximus (talk) 16:24, 6 October 2008 (UTC)

When I wrote d/dv, I was thinking in terms of a connection, specifically in its manifestation as a covariant derivative. From that perspective, d/dt is differentiation with respect to a vector which points in the direction of the positive t axis and has length 1.
I think it's reasonable to write the gradient as df/dx. But this sort of notation is never used, nor is d/dv, and I don't know why. I'd guess that in practice, it's messier than the usual notation.
Currently the WP MoS specifies that the d in d/dx can be either italic or upright. It also says not to change it from one to the other except for consistency within the article (or if you're rewriting from scratch). So if someone changes your italic d's to upright, I think you ought to revert! Ozob (talk) 21:57, 6 October 2008 (UTC)

As an engineer I'm not well educated in the more abstract principles of connection and contravariance. It sounds like you can call a scalar variable a vector in its own right? That seems trivial to me, but I'm not here to judge the value of mathematical concepts.

After thinking about it more, it seems to me the reason the gradient is not generally written df/dx is because the function f(x1,x2,x3) of which we are taking the gradient is not truly a function of the vector x in general. It is a scalar function of an arbitrary number of scalars. If you wrote it out, you generally wouldn't have any occurrences of the vector x made up of components in the standard basis. However, each scalar has its own axis and can be thought of as a scalar component of a vector x = x1i + x2j + x3k, but the vector x does not actually appear in the function f itself, unless you wanted to recast the function so that every instance of (x1,x2,x3) was a dot product of x with i,j,k. MarcusMaximus (talk) 05:01, 7 October 2008 (UTC)

Yes, I am calling a scalar a vector. I agree that it's trivial: A one-dimensional vector space is just the space of scalars. But if you think in those terms then you can see an analogy between the notation d/dt and the notation d/dv.
I disagree that f is not a function of x. To every x, you get a unique output f(x); this is the definition of a function. The way it's written is unimportant. As you say, you could write dot products every time you want to take a component. Doing so demonstrates that f is a function of x. But you could also add and subtract the same number, for example, f(x) = (x·i + 1) - 1. This formula specifies an algorithm to compute f(x) which goes like: Take the dot product of x and i, add one, then subtract one. As a function, this is the same as f(x) = x·i, even though the latter specifies a different algorithm, namely, take the dot product of x and i. In the same way, the distinction between f(x) = x·i and f(x) = x1 is only a matter of the algorithm specified by the formula, not the function.
I suppose that if one wanted to be very pedantic, then one could argue that even though x is an element of a three-dimensional vector space with a fixed basis, the underlying set of that vector space might not be the set of all triples (x1, x2, x3); instead it might be something like "all polynomials of degree less than or equal to two with basis 1, x, x^2". But vector spaces with a chosen basis are naturally isomorphic to arrays of numbers, so I don't think it's an important distinction. Ozob (talk) 22:39, 7 October 2008 (UTC)
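The point above, that two different formulas (algorithms) can define the same function, is easy to check concretely. This toy sketch (the sample vectors are made up) compares the roundabout algorithm (x·i + 1) − 1 with the direct one x·i:

```python
# Two different algorithms ("formulas") that define the same function:
# f(x) = (x . i + 1) - 1 versus f(x) = x . i, where i is the first
# standard basis vector, so x . i is just the first component of x.

def f_roundabout(x):
    return (x[0] + 1) - 1  # dot with i, add one, subtract one

def f_direct(x):
    return x[0]            # dot with i

samples = [[0.0, 1.0, 2.0], [-3.5, 4.0, 0.25], [7.0, -2.0, 9.0]]
for x in samples:
    assert f_roundabout(x) == f_direct(x)
print("same function, different algorithms")
```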

I do see your analogy now. Here's my theory on the d/dv question.

Usually people expect the arguments to appear explicitly in the formula for the function. With that in mind, f can be cast as a function of x; however, in general, it is not cast that way.

To use your example f(x) = x1, there is no connection between f and x unless you use a separate equation to define the relationship between x and x1, such as x = x1i + x2j + x3k. In that case, most people would infer that x is a function of x1 rather than the inverse (even though both are true). In that case it's counterintuitive to say that f is a function of the vector x; most people would say that both f and x are functions of x1.

The alternative is to define x1 = x·i, which does create the necessary intuitive order. But even in this instance you don't get the explicitness most people expect and desire. If you write

f(x) = x1 where x1 = x·i,

even though it is strictly true, it makes about as much intuitive sense as writing g(z) = y. People wonder what the heck you're talking about, until you tell them, oh, by the way, y = h(z). Then they wonder why the heck you wrote g(z) on the left side instead of g(y) but wrote the right side in terms of y rather than explicitly in terms of z.

So back to the main point, I think it is clear that we should really be talking about the gradient, not the directional derivative. In my opinion the main reason the gradient is not commonly written as d/dv is that it often requires counterintuition. I'm not arguing that you can't write the gradient as d/dv, but it only makes sense sometimes and it's more general to just use ∇f.

It's also not obvious to me (without knowing a priori how to take the gradient) what exactly you're supposed to do when you see df(x)/dx. You're actually taking the derivative of a scalar function f with respect to the sum of three vectors x1i, x2j, x3k, which doesn't make a lot of sense to me. It appears that you have to take the formula on the right hand side of f(x), which is a scalar algebraic expression containing x1,x2,x3, and implicitly differentiate it with respect to x using the chain rule. Then you have an algebraic expression in terms of x1, x2, x3, dx1/dx, dx2/dx, and dx3/dx. Next you have to find expressions for dx1/dx, dx2/dx, and dx3/dx. I'm not sure what those would be, since there is no such thing as vector division that I know of. Maybe they are i,j,k, respectively? MarcusMaximus (talk) 08:54, 8 October 2008 (UTC)

That's a good point that f(x) = x1 is counterintuitive. I was only paying attention to the formal logic, but you're right.
If we continue to think of d/dx as the gradient, then dx1/dx would be the gradient of x1, hence i, and similarly you'd get j and k for x2 and x3. And no, there is no such thing as vector division in general; division is a very, very restrictive condition.
I agree that the "right thing" to think about in a lot of cases is the gradient, not the directional derivative. A better thing to think about, it turns out, is differential forms and exterior derivatives. But in order to make sense of them and their relationship to the gradient, you need to distinguish between the tangent space and cotangent space, and this is more effort than most people are willing to make. Ozob (talk) 17:42, 8 October 2008 (UTC)

Forgive my ignorance, but could you be more explicit? You said, "If we continue to think of d/dx as the gradient, then dx1/dx would be the gradient of x1, hence i, and similarly you'd get j and k for x2 and x3."

This becomes circular, because I'm trying to find an expression for dx1/dx in order to prove that d/dx is the gradient. I just need to show that dx1/dx is i, but how do I get there? MarcusMaximus (talk) 23:26, 10 October 2008 (UTC)

Well, after a little thought,

dx1/dx = d(x·i)/dx

therefore

dx1/dx = (dx/dx)·i + x·(di/dx)

Since di/dx is the zero vector, we are left with

dx1/dx = (dx/dx)·i

If you substitute the derivative of x, the equation becomes a triviality, collapsing to dx1/dx = dx1/dx. It seems that the value of dx/dx must be an entity that dot multiplies with the vector i and leaves it unchanged. The only thing I know of that does that is the unit dyadic (ii + jj + kk), but I have no idea how that would come into play. I'm still stumped. MarcusMaximus (talk) 04:04, 11 October 2008 (UTC)

Hmm. If I understand you correctly, you're looking for a way to manipulate the symbol d/dx (using the usual rules) that makes the expression for the gradient pop out. I'm not sure whether or not one can do this. At some point one has to define what d/dx means; I intended to define it to be the gradient because that seemed to be the only way to make it consistent. You seem to be looking to define it by certain properties that it should satisfy (the product rule, chain rule, etc.), but I don't think that's enough to get a unique expression out. (My reasoning comes from Riemannian metrics; the gradient is what one gets by taking the exterior derivative of f and contracting with the metric, so if one changes the metric one gets a different gradient. If you normalize d/dx by choosing the values of dx1/dx and similar expressions in the other variables, you should have enough information to determine the gradient. But without that there are too many possible metrics.)
I also think that if f is a vector-valued function, then d/dx should mean (by definition again) the total derivative. This is consistent with the use of d/dx to mean "derivative with respect to the variable x"; if x happens to be a one-dimensional vector (so that it's okay to identify the vector x with the scalar x), then the derivative with respect to the vector x is equal to the ordinary d/dx, just like it should be.
Does this work? I think I've dodged circularity this time by making a definition. Ozob (talk) 23:07, 11 October 2008 (UTC)

That is rather unsatisfying. It just seems to me (with no particular reason) that we should be able to prove that d/dx is the gradient. We already know how to take derivatives according to the definition based on the limit of the slope of the secant, and we know what a vector is. What we don't know, I guess, is what it means to take the derivative with respect to a vector, but I was hoping to be able to derive it. MarcusMaximus (talk) 09:14, 12 October 2008 (UTC)

Hey, I just got an idea! Let's go back to the definition. I'm going to take this as the definition of the derivative in one variable:
lim_{h→0} [f(x + h) − f(x) − f′(x)h] / h = 0
(That is to say, f′(x) is the unique number which makes the above equation hold.) OK, now I make everything a vector:
lim_{h→0} |f(x + h) − f(x) − f′(x)h| / |h| = 0
Now, would you agree that df/dx ought to satisfy this equation? But this is exactly the definition of the total derivative.
When f is a real-valued function, this is not exactly the same as the gradient; instead it's a linear transformation R3 → R. If one fixes a basis of R3 (which is the same as fixing a Riemannian metric at the point we're differentiating at) then one can identify R3 with its dual space and convert the linear transformation into a vector in R3. That vector will be the gradient. Ozob (talk) 20:19, 12 October 2008 (UTC)
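The limit definition of the total derivative can be tested numerically. In this sketch (the function f and the point x are my own illustration, not from the discussion) the remainder |f(x + h) − f(x) − Df(x)h| / |h| visibly shrinks as |h| does:

```python
import math

# Numerical illustration of the limit definition of the total derivative:
# |f(x + h) - f(x) - Df(x)h| / |h|  ->  0  as  |h| -> 0,
# where Df(x) is the linear map given by the row of partial derivatives.

def f(x):
    return x[0]**2 + math.sin(x[1]) + x[1]*x[2]

def Df(x):
    # Row of partial derivatives of f, applied to h as a linear map.
    A = [2*x[0], math.cos(x[1]) + x[2], x[1]]
    return lambda h: sum(a*hi for a, hi in zip(A, h))

x = [1.0, 0.5, 2.0]
L = Df(x)
direction = (0.3, -0.7, 0.5)
errs = []
for scale in (1e-1, 1e-3, 1e-5):
    h = [scale*c for c in direction]
    xh = [xi + hi for xi, hi in zip(x, h)]
    norm_h = math.sqrt(sum(hi*hi for hi in h))
    errs.append(abs(f(xh) - f(x) - L(h)) / norm_h)

print(errs)  # each entry is roughly 100x smaller than the last
```

The remainder shrinks linearly with |h| here because the error term is of second order in h.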

Excellent. So ƒ’(x) (ƒ prime) is dƒ/dx, the gradient of ƒ(x). Then ƒ’(x)h is the juxtaposition of two vectors...a dyadic? Or does there need to be a dot product in there, ƒ’(x)•h, because the other two terms in the numerator are scalars? Even if ƒ(x) is a vector function you have the sum of two vectors with a dyadic. MarcusMaximus (talk) 07:07, 13 October 2008 (UTC)

No, f′(x) is a linear transformation. If f is a real-valued function, then f′(x) is a linear functional: It takes a vector, in this case h, and returns a scalar. When f is a vector-valued function, then f′(x) can be written as a matrix. (Sorry for the TeX, but I'm having strange formatting issues.)
I think it's worth pointing out that a dyadic tensor is the same thing as a linear transformation (see the last paragraph of the dyadic tensor article as well as dyadic product). In that interpretation, f'(x)h is the application of h to the dyadic tensor f'(x); in the case when f is real-valued, however, it's a dyadic tensor in the basis vectors ii, ij, ik (and no others). Ozob (talk) 15:00, 13 October 2008 (UTC)

So is this definition operational? Can I plug in real expressions and do some algebra and calculus to get a real answer? Starting with ƒ(x) = x·i = x1,

 

MarcusMaximus (talk) 03:21, 17 October 2008 (UTC)

I don't think so. Usually one proves that for a continuously differentiable function, the total derivative equals the matrix of partial derivatives; then one computes the total derivative using partial derivatives. But I don't know how else to do it. Ozob (talk) 18:35, 17 October 2008 (UTC)

I suppose it is useful in mathematics to prove something is true after you have the correct answer. However, for an engineer using applied mathematics, it is important that definitions be operational. I'll keep looking. MarcusMaximus (talk) 23:32, 18 October 2008 (UTC)

The point of such a theorem is that d/dx can (under mild hypotheses) be computed easily. Taking partial derivatives is easy, and putting them in a matrix is even easier. So this theorem tells you that the total derivative, while a priori hard to compute, is actually easy to compute. I agree that the computation can't be done easily from the definition itself, but that's no different from computing any complicated derivative. Think about differentiating a function where you need to use the product and chain rules in combination several times; directly from the definition, it's a huge and nearly impossible mess, but with the product and chain rules it becomes easy. Computing the total derivative is analogous: a hard definition (the one above) which one proves to be the same as an easy rule (take partial derivatives and put them in a matrix). Ozob (talk) 01:00, 19 October 2008 (UTC)
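The "easy rule" described here (assemble the partial derivatives into a matrix) can be sketched as follows; the example function is made up, and the partials are approximated by central differences rather than computed symbolically:

```python
import math

# Approximate the total derivative (Jacobian matrix) of a vector-valued
# function f : R^2 -> R^2 by assembling central-difference partials,
# then compare against the analytic answer.

def f(x):
    return [x[0]*x[1], math.exp(x[0]) + x[1]**2]

def jacobian(f, x, h=1e-6):
    # Matrix of partial derivatives: J[i][j] = d f_i / d x_j.
    m, n = len(f(x)), len(x)
    J = [[0.0]*n for _ in range(m)]
    for j in range(n):
        xp = list(x); xp[j] += h
        xm = list(x); xm[j] -= h
        fp, fm = f(xp), f(xm)
        for i in range(m):
            J[i][j] = (fp[i] - fm[i]) / (2*h)
    return J

x = [1.0, 2.0]
J = jacobian(f, x)
# Analytic Jacobian at (1, 2): [[x2, x1], [e**x1, 2*x2]] = [[2, 1], [e, 4]]
print(J)
```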

Vector spaces

Hi Ozob,

I have asked for a GA review at the round table, but people are busy/dizzy with LaTeX formatting and icon questions ;) I thought you might be interested in having a look at vector spaces and giving it a GA review? This is the page. Thanks a lot. Jakob.scholbach (talk) 14:13, 1 December 2008 (UTC)

AfD nomination of Bishop–Keisler controversy

 

An article that you have been involved in editing, Bishop–Keisler controversy, has been listed for deletion. If you are interested in the deletion discussion, please participate by adding your comments at Wikipedia:Articles for deletion/Bishop–Keisler controversy. Thank you. Mathsci (talk) 05:42, 14 December 2008 (UTC)

Radius of convergence

Hi, your tweak does help, thanks. Do you happen to know what the coefficients of x^n would be in standard form? Regards, Rich (talk) 02:08, 16 December 2008 (UTC)

It'd have something to do with the base-2 logarithm of n, rounded down. Looks like it'd be
 
Ozob (talk) 02:53, 16 December 2008 (UTC)

Table on trig identities

[3] I disagree. Provide reasoning as to why it's clearer. It looks to me as though there are more functions than there are. The parentheses make it clear what the abbreviations are. —Anonymous DissidentTalk 05:12, 20 December 2008 (UTC)