I recently completed an introductory course on multivariate calculus, and I'm still trying to come to grips with the concepts taught in the vector calculus segment. Right now, I'm reviewing the concept of divergence.
I understand the verbal definition of divergence: that (in $\mathbb{R}^3$, at least) it's the volumetric density of the outward flux of a vector field. The formal definition that Wikipedia and Wolfram offer makes similar sense:
$$\mathrm{div}\,\mathbf{F}=\lim_{V\to 0}\frac{1}{|V|}\oiint_{S(V)}\mathbf{F}\cdot\hat{\mathbf{n}}\;dS$$
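To make sure I'm reading that definition correctly, here's a check I worked out myself with a hand-picked field: take $\mathbf{F}=x\,\mathbf{i}$ and let $V$ be a cube of side $\epsilon$ centered at the origin. Only the two faces perpendicular to $\mathbf{i}$ contribute, and on each of them $\mathbf{F}\cdot\hat{\mathbf{n}}=\epsilon/2$, so

$$\frac{1}{\epsilon^3}\oiint_{S(V)}\mathbf{F}\cdot\hat{\mathbf{n}}\;dS=\frac{1}{\epsilon^3}\left[\frac{\epsilon}{2}\,\epsilon^2+\frac{\epsilon}{2}\,\epsilon^2\right]=1,$$

independent of $\epsilon$, which does agree with $\nabla\cdot\mathbf{F}=\partial x/\partial x=1$. So the two definitions match on this example; what I'm missing is why they match in general.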
What I don't understand, however, is how
$\mathrm{div}\,\mathbf{F}=\nabla\cdot\mathbf{F}$ follows from the above statement. Something like $\mathrm{div}\,\mathbf{F}=|\nabla\mathbf{F}|$ seems to make more sense to me, because $\mathrm{div}\,\mathbf{F}=\nabla\cdot\mathbf{F}$ just adds up the partials in what are essentially three random directions ($\mathbf{i},\,\mathbf{j},\,\mathbf{k}$, after all, are just the conventional basis vectors for $\mathbb{R}^3$), whereas $\mathrm{div}\,\mathbf{F}=|\nabla\mathbf{F}|$ isn't as... arbitrary? I suppose.
In short: how does one proceed from the formal definition of divergence, as the limit of flux per unit volume as the volume shrinks to zero, to the identity $\mathrm{div}\,\mathbf{F}=\nabla\cdot\mathbf{F}$?
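As a sanity check (not a derivation), I did convince myself numerically that the two sides agree. This is a throwaway Python sketch of my own: the test field and the function names are arbitrary choices of mine, and it just approximates the flux-per-volume from the limit definition over a small cube and compares it to $\nabla\cdot\mathbf{F}$ at the cube's center:

```python
import numpy as np

# Arbitrary smooth test field F = (x*y, y*z, x*z); by the dot-product
# formula its divergence is y + z + x.
def F(x, y, z):
    return np.array([x * y, y * z, x * z])

def div_F(x, y, z):
    return y + z + x

def flux_over_volume(center, h, n=50):
    """Approximate (1/|V|) * (outward flux of F) over a cube of side h
    centered at `center`, using an n-by-n midpoint rule on each face."""
    # Midpoints of an n-by-n grid spanning one face, in face coordinates.
    t = (np.arange(n) + 0.5) / n * h - h / 2
    u, v = np.meshgrid(t, t)
    dA = (h / n) ** 2  # area of one grid cell
    flux = 0.0
    for axis in range(3):           # pair of faces perpendicular to this axis
        for sign in (+1, -1):       # +1: outward normal +e_axis, -1: -e_axis
            pts = [None, None, None]
            pts[axis] = np.full_like(u, center[axis] + sign * h / 2)
            others = [i for i in range(3) if i != axis]
            pts[others[0]] = center[others[0]] + u
            pts[others[1]] = center[others[1]] + v
            Fvals = F(pts[0], pts[1], pts[2])
            # F . n on this face is sign * (axis-component of F)
            flux += sign * np.sum(Fvals[axis]) * dA
    return flux / h ** 3  # divide by the cube's volume

p = (0.3, -0.7, 1.2)
print(flux_over_volume(p, h=1e-2))  # flux per volume for a small cube
print(div_F(*p))                    # nabla . F at p; both are near 0.8
```

Shrinking `h` further makes the agreement tighter, which is exactly the limit in the definition; but a numerical coincidence is not the proof I'm after.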