HockeyRoman All American 11811 Posts
I'm looking to brush up on my Calculus skills before the Fall and I'm curious if anyone has any old calculus textbooks lying around they'd be willing to let me borrow until about mid-August. Thanks! 6/21/2012 12:45:42 PM |
NeuseRvrRat hello Mr. NSA! 35376 Posts
just check one out from the library 6/21/2012 12:55:31 PM |
lewisje All American 9196 Posts
There is many a PDF available...
This one is accompanied by a series of videotaped lectures ("Highlights of Calculus"). Gilbert Strang - Calculus (1991): http://ocw.mit.edu/resources/res-18-001-calculus-online-textbook-spring-2005/textbook/
This one takes an unusual tack, eschewing the epsilontics of typical approaches in favor of numbers that are smaller than any positive real number yet greater than zero (infinitesimals), along with their reciprocals (infinities), in a self-consistent system of hyperreal numbers. H. Jerome Keisler - Elementary Calculus: An Infinitesimal Approach (2000, rev. Feb. 2012): http://www.math.wisc.edu/~keisler/calc.html
This is a good refresher... Paul Garrett - Notes on First-Year Calculus (latest rev. 29 May 2012): http://www.math.umn.edu/~garrett/calculus/
This is another offbeat free online textbook, with helpful Java applets (mathematicians and physical scientists were early adopters of the format, so you'll find tons of Java applets, rather than Flash files, used to illustrate mathematical concepts on the Web) linked around... Dan Sloughter - Difference Equations to Differential Equations (2000): http://www.synechism.org/wp/difference-equations-to-differential-equations/
These are just a few of the many freely available high-quality textbooks on the subject; calculus is probably the most-covered subject among freely available textbooks (next is linear algebra). The book by Keisler is probably the best if you want to [i]understand[/i] calculus, because infinitesimals are much more intuitive than arguments involving limits (and that's how Leibniz himself thought about his development of the calculus, as well as how contemporary mathematicians thought when dealing with their own notions of infinite summations, products, etc., even though the first rigorization of the subject was based on the Newtonian notion of limits). 6/21/2012 8:05:13 PM |
HockeyRoman All American 11811 Posts
That kicks ass. Thank you! I swear one day I'll be able to take the derivative of 1/x without first making it x^-1. 6/21/2012 8:12:54 PM |
lewisje All American 9196 Posts
It's easiest to use that transformation once you've learned the general rule, but going straight from the definition of the derivative, it is indeed easiest to use "1/x":
(with infinitesimals) Let a be an infinitesimal; then the derivative of 1/x is the standard part of (1/(x+a)-1/x)/a. Combine the fractions to get (x-(x+a))/((x+a)ax). Then simplify the numerator to get -a/((x+a)ax). Then cancel the common factor a to get -1/((x+a)x). Finally, take the standard part: -1/x^2.
(with limits) Basically the same as above, except that a is now a nonzero number and at the end, take the limit as a approaches 0.
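The limit version is easy to sanity-check numerically; here's a minimal sketch in Python (the helper names are my own, and a small but finite a stands in for the infinitesimal):

```python
import math

def diff_quotient(f, x, a):
    """Difference quotient (f(x+a) - f(x)) / a for a small nonzero a."""
    return (f(x + a) - f(x)) / a

def numeric_derivative(f, x, a=1e-7):
    # A one-sided quotient, mirroring the limit definition above
    # (a central difference would be more accurate, but less literal).
    return diff_quotient(f, x, a)

def recip(x):
    return 1.0 / x

# d/dx (1/x) at x = 2 should be -1/x^2 = -0.25
print(numeric_derivative(recip, 2.0))  # close to -0.25
```

Shrinking a makes the quotient approach -1/x^2, which is exactly what "taking the standard part" formalizes in the hyperreal version.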
Now the general rule requires a long and winding road...
First, the natural logarithm ln(x) is defined for x>0 as the antiderivative of 1/x that has value 0 when x=1 (or equivalently, the integral of 1/t as t ranges from 1 to x), and the properties that may first have been covered in pre-calculus are derived from this definition (so you can be assured that we're defining the same thing).
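That integral definition is also easy to check numerically; the sketch below (names and tolerances are my own choices, standard library only) approximates the integral of 1/t from 1 to x with the midpoint rule and compares it to math.log:

```python
import math

def ln_via_integral(x, n=100000):
    """Approximate ln(x) as the integral of 1/t for t from 1 to x (midpoint rule)."""
    h = (x - 1.0) / n
    total = 0.0
    for i in range(n):
        t = 1.0 + (i + 0.5) * h  # midpoint of the i-th subinterval
        total += 1.0 / t
    return total * h

print(ln_via_integral(5.0))  # close to math.log(5) ~ 1.6094
print(ln_via_integral(1.0))  # the defining normalization: ln(1) = 0
```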
Then the natural exponential e^x is defined as the inverse of ln(x), and its properties are derived from this definition. The general exponential b^x for b>0 is in turn defined as e^(ln(b)x), and its properties are derived from that, so again you can be assured that it's the same "taking to a power" function that you covered in high school.
Finally, using a theorem about the derivatives of functions whose inverses have known derivatives, and also the chain rule, the derivative of x^r (where x>0 and r is an arbitrary real number) is derived using the fact that x^r=e^(r*ln(x)). By this point, other methods will have been used to find the derivative of x^q, where q is a rational number and x is in the natural domain (nonzero if q<0, nonnegative if the denominator of q in lowest terms is even); thereby the general rule, which I have carefully avoided stating at all in this post, will be established.
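To see that long road pay off, here's a quick numeric spot-check (my own helper names, stdlib only) that defines x^r through e^(r*ln(x)) as above and confirms the general rule d/dx x^r = r*x^(r-1) for an irrational exponent, where the rational-exponent methods don't apply:

```python
import math

def power(x, r):
    # x^r for x > 0, defined via the exponential as in the derivation above
    return math.exp(r * math.log(x))

def numeric_derivative(f, x, h=1e-6):
    # Central difference quotient
    return (f(x + h) - f(x - h)) / (2.0 * h)

x, r = 3.0, math.pi  # an irrational exponent forces the e^(r ln x) route
lhs = numeric_derivative(lambda t: power(t, r), x)
rhs = r * power(x, r - 1.0)
print(lhs, rhs)  # the two values nearly agree
```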
(In complex analysis, an upper-undergraduate-level course, the rule for the derivative of z^a with respect to z is made even more general, using an "analytic continuation" to define ln(z) for all nonzero z and a continuity argument to deal with the case z=0, where ln(z) approaches -infinity no matter how z approaches 0; basically, that simple rule for the derivative of a power function applies wherever the function itself makes sense, it's that elegant.)
[Edited on June 22, 2012 at 5:01 AM. Reason : tl;dr, I gave one of many reasons you'll go over logarithms in calculus yet again] 6/22/2012 4:59:54 AM |
HockeyRoman All American 11811 Posts
Haha. Thanks. I just sat and wrote out d/dx (1/x) = -1/x^2, d/dx (1/x^2) = -2/x^3, etc. until the pattern was ingrained in my head. And when I need to take the antiderivative I just recall that particular relationship. 6/22/2012 3:10:13 PM |
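That memorized pattern, d/dx x^(-n) = -n*x^(-(n+1)), can be drilled against a difference quotient too; a small sketch (helper names are mine):

```python
def numeric_derivative(f, x, h=1e-6):
    # Central difference quotient
    return (f(x + h) - f(x - h)) / (2.0 * h)

x = 2.0
for n in range(1, 5):
    approx = numeric_derivative(lambda t: t ** (-n), x)
    exact = -n * x ** (-(n + 1))  # the memorized pattern
    print(f"n={n}: numeric {approx:.6f} vs pattern {exact:.6f}")
```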
TULIPlovr All American 3288 Posts
Most thrift stores have a book section, and most will have at least one calculus textbook for $2-3. 6/23/2012 2:53:52 AM |
lewisje All American 9196 Posts
You might still have occasion to use the definition, when the derivative exists everywhere but is not continuous everywhere; as an example, the function x^2*cos(1/x) (where x!=0) can be made into a continuous function by defining its value at 0 to be 0, and even though the derivative is not continuous at 0, it still does *exist* at 0...
Using the product and chain rules, the derivative where x!=0 is 2x*cos(1/x)+sin(1/x), which like sin(1/x) oscillates infinitely often in every neighborhood of 0; the first term oscillates infinitely often with a range of oscillation approaching 0 in the limit, while the second term oscillates infinitely often between -1 and 1, so the derivative is indeed discontinuous at 0. However, using the definition of the derivative directly, the derivative of this function at 0 can still be calculated... ((0+a)^2*cos(1/(0+a))-0)/a=(a^2*cos(1/a))/a=a*cos(1/a), which approaches 0 as a->0.
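Both claims above are easy to see numerically; this sketch (function names are my own) checks that the difference quotient at 0 shrinks toward 0 while the derivative formula keeps hitting values near ±1 arbitrarily close to 0:

```python
import math

def f(x):
    # x^2 * cos(1/x), extended continuously by f(0) = 0
    return 0.0 if x == 0.0 else x * x * math.cos(1.0 / x)

# Difference quotient at 0: (f(a) - f(0)) / a = a*cos(1/a), which -> 0 as a -> 0
for a in (1e-2, 1e-4, 1e-6):
    print(a, (f(a) - f(0.0)) / a)  # magnitudes shrink toward 0

def fprime(x):
    # The derivative for x != 0, from the product and chain rules
    return 2.0 * x * math.cos(1.0 / x) + math.sin(1.0 / x)

# Sample f' where sin(1/x) = +/-1, i.e. at x = 1/(k*pi + pi/2); these points
# march toward 0, yet f' stays near +/-1 there, so f' is discontinuous at 0
# even though f'(0) = 0 exists.
samples = [fprime(1.0 / (k * math.pi + math.pi / 2)) for k in range(1, 8)]
print(samples)
```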
Below is some more advanced language; you won't need to deal with this stuff until at least upper-undergraduate real analysis.
By Darboux's Theorem (all derivatives satisfy the conclusion of the intermediate-value theorem), that's as weird as discontinuities of derivatives can be. If a function is differentiable, but not continuously so, all points of discontinuity must be essential singularities (rather than jumps, removable discontinuities, or poles). Moreover, the value of the derivative at a point of discontinuity must lie inside the range within which infinite oscillation occurs in all neighborhoods; otherwise there would be a neighborhood of the discontinuity for which the intermediate-value property fails on its closure, so that a closed interval would be mapped to something other than an interval.
Put another way, the antiderivative and derivative are not always inverse operations. If you take the derivative of an antiderivative of a function with a pole or a jump discontinuity, you will get a function that is undefined at the original function's point of discontinuity, because the antiderivative of that function is not differentiable there; if you do this with a removable discontinuity, you'll "remove" it. For an essential singularity, I think it goes as follows: if f(x) has an essential singularity at X, then as long as the lim inf of f(x) as x->X from the left and the right are both less than each lim sup of f(x) as x->X, f(x) can be the derivative of a function; and if we define the "min inf" to be the greater of the two lim infs, the "max sup" to be the lesser of the two lim sups, and F(x) to be the derivative of an antiderivative of f(x), then F(X) is the arithmetic mean of the min inf and the max sup. (Of course, going the other way around, an antiderivative of a derivative of a differentiable function is always the original function modulo a constant, so in this way they're not quite inverses either.) 6/23/2012 5:39:47 AM |
simonn best gottfriend 28968 Posts
you can just watch lectures if you prefer
http://ocw.mit.edu/courses/#mathematics 6/23/2012 1:56:50 PM |
lewisje All American 9196 Posts
Indeed, and although the very first link I posted ITT was accompanied by lectures, that was for a high-school class; there is also a set of lectures by David Jerison for the actual MIT intro. calculus class elsewhere on OCW: http://ocw.mit.edu/courses/mathematics/18-01-single-variable-calculus-fall-2006/
In this class, a guy who looks uncannily like Rick Santorum covers the material of a typical first-year calculus sequence in just one semester, because MIT students are just that smart and nearly all of them who haven't tested out still did well in some high-school calculus class; anyway, the videos are also in a more readily watchable format on YouTube: https://www.youtube.com/course?list=PL590CCC2BC5AF3BC1
(Also I'm tempted to improve Better Wolf Web yet again, to turn links to YouTube EDU classes into embedded video players with the playlists queued up...) 6/23/2012 7:23:17 PM |
catalyst All American 8704 Posts
maths ITT 6/26/2012 5:07:02 PM |