
In college I was taught Linear Algebra from the operator point of view rather than with matrices. That way the theorems are clearer and the student's understanding is deeper, but for applications it's better to study from the matrix point of view, with lots of examples. Kuttler's book was refreshing in that sense. His other books are excellent, too. If you have been studying pure math (or French-style applied math, which is just pure math with a concentration in Analysis), they are a light and fun complementary read.



Although I understand that matrix-soup is kind of the entropic end state of all high school mathematics pedagogy, I think it is a real tragedy: in the UK especially, even the best and brightest barely touch any real mathematics before they leave secondary school, so they leave with almost literally no idea of what university mathematics consists of. It's all well and good training people to be engineers, but the universities end up teaching them the whole syllabus again in about 6 weeks. It's just shit.

The only reason I am now doing theoretical physics (I was in the dumb group initially and worked my way up largely by myself) is that I read a calculus textbook by accident when I was 14 and got hooked. Even when I made it to the top of the pile I still wasn't allowed to do anything beyond calculus, because the module system meant we had to choose as a class whether or not to do group theory.


I understand; I had a similar experience in high school too. My comment was largely referring to my own context: this semester I'll finish a mathematical engineering degree (in my country engineering degrees are 12 semesters long), and the abstract approach is rewarding, but without applications it can only be endured for so long. It does not help that a large fraction of mathematical engineers end up, after graduating, doing software work involving statistics, optimization or some other advanced mathematical topic. With a dropout rate of ~80%, not many people are willing to put themselves through rigorous analysis classes on top of the engineering requirements for so many years to get a job doing something they could have learned in a more practical way. But we are very well paid and the unemployment rate is 0%, since there are maybe 10 graduates in the whole country each year.


Good luck to you. I wish you every success. Education in this country has been shit, and pretty devastating, for such a long time.

Apologies for the negative waves.


No negativity measured, actually


The best book on Linear Algebra, by far, that elegantly teaches it from the vector space and linear operator point of view is

Paul Halmos "Finite-Dimensional Vector Spaces"

For instance, the way Halmos introduces the determinant of a matrix (or an operator) is the most consistent, elegant and simple way I have ever encountered. OTOH, in Kenneth Kuttler's LinAlg books the determinant is pulled out of thin air, as in 1000+ other similar books.


While Halmos' book is lovely, I still prefer the geometric definition of the determinant to the algebraic one: the determinant of a matrix is the signed volume (or area) of the parallelepiped spanned by its columns. Equivalently, the determinant of a linear map is the volume of the image of a unit cube under that map (or of any arbitrary shape of volume one, not necessarily a cube). All the algebraic properties of the determinant (multi-linearity, anti-symmetry, etc.) follow easily from the geometric definition.
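
As a quick sanity check of that picture, here is a throwaway numpy sketch (mine, not from any book; shoelace_area is just an illustrative helper): the signed shoelace area of the image of the unit square under a 2x2 matrix comes out exactly as the determinant.

    import numpy as np

    def shoelace_area(points):
        # Signed area of a polygon given as an (m, 2) array of vertices in order.
        x, y = points[:, 0], points[:, 1]
        return 0.5 * np.sum(x * np.roll(y, -1) - y * np.roll(x, -1))

    A = np.array([[2.0, 1.0],
                  [0.5, 3.0]])

    # Image of the unit square under A: the parallelogram spanned by A's columns.
    unit_square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
    image = unit_square @ A.T

    assert np.isclose(shoelace_area(image), np.linalg.det(A))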

Really, I don't see what you like about Halmos' definition of the determinant... I have just read it (page 99 of my copy), and he admits that it is a "somewhat roundabout procedure" just after giving the definition! There are other references that seem much cleaner (e.g. Spivak's Calculus on Manifolds, using exterior algebra).


> Really, I don't see what you like about Halmos' definition of the determinant...

Halmos shows (it is almost trivial) that the space of anti-symmetric n-forms on an n-dimensional space L_n is 1-dimensional. Given a nonzero such form w, the map (x1,...,xn) -> w(Ax1,...,Axn) is again an anti-symmetric n-form, so it must be a scalar multiple of w: w(Ax1,...,Axn) = const * w(x1,...,xn). This scalar const is called the determinant of A. It has all the properties you would ascribe to a volume, e.g. the volume spanned by collinear column vectors is zero, which makes a nice bridge to geometry in L_n. Also, within the space of just one page (p. 99) he introduces the determinant and proves its main properties, like det(A*B) = det(A)*det(B) and therefore det(A^-1) = 1/det(A).
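
(A quick numerical illustration of those last two properties, my own sketch rather than anything from Halmos:)

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4))
    B = rng.standard_normal((4, 4))

    # det is multiplicative: det(A*B) = det(A)*det(B)
    assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))

    # hence det(A^-1) = 1/det(A) whenever A is invertible
    assert np.isclose(np.linalg.det(np.linalg.inv(A)), 1.0 / np.linalg.det(A))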


The determinant of n vectors {vi} relative to a particular basis {ei} in an n-dimensional vector space is the scalar-valued ratio:

( v1 ∧ v2 ∧ ··· ∧ vn ) / ( e1 ∧ e2 ∧ ··· ∧ en )

The signed volume per se is just the n-vector: v1 ∧ v2 ∧ ··· ∧ vn

Generally working with the wedge product is more pleasant and conceptually clearer than working with determinants. Among other things we don't need to make an arbitrary choice of basis or unit n-vector. There's also no reason to limit ourselves to n terms. v1 ∧ v2 is also a reasonable quantity to use, etc.
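
Here is a small Python sketch of that ratio (mine, just for illustration; perm_sign and top_wedge_coeff are made-up helper names): the coefficient of v1 ∧ ··· ∧ vn relative to the standard e1 ∧ ··· ∧ en, computed as an alternating sum over permutations, agrees with the usual determinant of the matrix whose columns are the vi.

    import itertools
    import numpy as np

    def perm_sign(p):
        # Sign of a permutation given as a tuple of indices.
        p, sign = list(p), 1
        for i in range(len(p)):
            while p[i] != i:
                j = p[i]
                p[i], p[j] = p[j], p[i]
                sign = -sign
        return sign

    def top_wedge_coeff(vectors):
        # Coefficient of v1 ^ ... ^ vn relative to e1 ^ ... ^ en
        # (an alternating sum over permutations, i.e. the Leibniz formula).
        n = len(vectors)
        return sum(perm_sign(p) * np.prod([vectors[i][p[i]] for i in range(n)])
                   for p in itertools.permutations(range(n)))

    rng = np.random.default_rng(1)
    vs = [rng.standard_normal(3) for _ in range(3)]
    assert np.isclose(top_wedge_coeff(vs), np.linalg.det(np.column_stack(vs)))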


The beauty of Halmos' derivation, which is similar but not identical to exterior algebra (the wedge product), is that his approach is basis-independent. The determinant, by his definition, is a scalar that is invariant across all bases. It is very geometrical in nature.


The determinant inherently involves a basis (or at the very least a choice of unit n-vector). Or if you like you can think of the determinant as a function of a square matrix (grid of numbers), rather than a function of a collection of vectors.

When you take the basis out, that's the wedge product, which inherently includes the orientation. Conveniently, there is only one degree of freedom for n-vectors in n-dimensional space. When we take the quotient of two n-vectors in n-dimensional space we therefore get a scalar.


Let me sketch a way to get the determinant basis-free:

Say we live in an n-dimensional vector space V and have an endomorphism f : V -> V. Now, we consider the pullback [1] f* : Λⁿ(V) -> Λⁿ(V) induced by f on the vector space of n-linear alternating forms Λⁿ(V) on V.

This is just an endomorphism on Λⁿ(V). However, Λⁿ(V) is one-dimensional, hence necessarily invariant under f*. This means f* has an eigenvalue (!). This eigenvalue is what we usually call the determinant of f.

This is completely independent of any choice of basis, orientation, or an inner product.

[1] That is, given an element w ∈ Λⁿ(V) and an arbitrary n-tuple v₁, ..., vₙ of vectors from V, we have (f*w)(v₁, ..., vₙ) = w(f(v₁), ..., f(vₙ))
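
A little numpy sketch of that eigenvalue (my illustration; the standard determinant is used only to build one concrete nonzero alternating n-form w, since every other such form is a scalar multiple of it):

    import numpy as np

    rng = np.random.default_rng(2)
    n = 4
    f = rng.standard_normal((n, n))   # an endomorphism of R^n

    # One concrete nonzero alternating n-form on R^n.
    def w(*vectors):
        return np.linalg.det(np.column_stack(vectors))

    vs = [rng.standard_normal(n) for _ in range(n)]

    # (f*w)(v1,...,vn) = w(f v1,...,f vn) comes out as det(f) * w(v1,...,vn),
    # i.e. w is an eigenvector of the pullback with eigenvalue det(f).
    assert np.isclose(w(*[f @ v for v in vs]), np.linalg.det(f) * w(*vs))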


> and have an endomorphism f

And the "outermorphism" f̱ of your linear transformation, when applied to an arbitrary pseudoscalar, returns another pseudoscalar, which is necessarily a scalar multiple of it; so on pseudoscalars it acts as a scaling operation.

So what we could say in that case is that f̱(p) / p = d (some scalar, the "determinant" of f), where p is any pseudoscalar p = v1 ∧ v2 ∧ ··· ∧ vn.

This turns out to be about the same as what I wrote a few comments upthread. We are just dealing with

f̱( v1 ∧ v2 ∧ ··· ∧ vn ) / ( v1 ∧ v2 ∧ ··· ∧ vn ) = d

= ( f(v1) ∧ f(v2) ∧ ··· ∧ f(vn) ) / ( v1 ∧ v2 ∧ ··· ∧ vn )

instead of ( v1 ∧ v2 ∧ ··· ∧ vn ) / ( e1 ∧ e2 ∧ ··· ∧ en ) = d

And now we are talking about a property of a linear transformation instead of a property of a collection of n vectors.

In many practical situations, an oriented quantity like v1 ∧ v2 ∧ ··· ∧ vn is more useful than a scalar ratio d though.


If you define determinant as volume, how do you define volume? I agree that it's pedagogically sound to motivate the notion of determinant by the volume of a parallelepiped, but using volume as the definition of determinant just doesn't sound right.

And strictly speaking, determinant is not volume because the former is dimensionless. It is the scaling factor of the volume when a geometric entity is transformed by a linear map.


> If you define determinant as volume, how do you define volume?

How do you define "length" and "area"? I guess that if you don't already have a very firm grasp of these basic concepts, then you have no business (yet) studying determinants. Much later, once you have thoroughly mastered lengths, areas, volumes and hypervolumes, and also linear algebra and determinants (however they are defined), you can embark on the elegant definitions using exterior algebra and the like. Notice that Halmos himself says that his treatment is appropriate for a *second* course in linear algebra, preparing the ground for the later study of infinite-dimensional spaces.

> And strictly speaking, determinant is not volume because the former is dimensionless.

This really depends on the context. If you are working in Euclidean space, you already have "units", and the determinant makes sense in itself, as the volume spanned by a set of vectors.


I find both the geometric and algebraic definitions quoted here unsatisfying. What is a “volume” spanned by a vector space of polynomials or co-tangent functionals?

*A* determinant function (not *the*) is simply a skew-symmetric n-linear map into the underlying field.

Done. Now we get the volume interpretation when it's appropriate, the wedge product interpretation, and the generalization to finitely generated projective modules (if a determinant function exists there; additional conditions are needed for its existence).
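
To make the two axioms concrete, a throwaway numpy check (my sketch, not from the comment above; D is just an illustrative name) that the familiar determinant-of-columns map is linear in each argument and flips sign when two arguments are swapped:

    import numpy as np

    rng = np.random.default_rng(3)
    n = 3

    def D(*vectors):
        # Candidate determinant function: map n vectors to a field element.
        return np.linalg.det(np.column_stack(vectors))

    u, v, x, y = (rng.standard_normal(n) for _ in range(4))
    a, b = 2.0, -0.7

    # n-linear: linear in each slot separately
    assert np.isclose(D(a * x + b * y, u, v), a * D(x, u, v) + b * D(y, u, v))

    # skew-symmetric: swapping two arguments flips the sign
    assert np.isclose(D(x, u, v), -D(u, x, v))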


Thanks, I'll look it up. The best textbook from which I studied (operators) was Elon Lima's Algebra Linear. Sadly the only physical copies are sold in Brazil.


If you want to see the matrix point of view done well, there's Linear Algebra Done Wrong: https://www.math.brown.edu/streil/papers/LADW/LADW.html. You can read a bit about the motivation for doing it that way on that website.

The title is a reference to a somewhat well-known book, Linear Algebra Done Right, which avoids using determinants to develop the theory (resulting in a somewhat novel/cleaner presentation). It's unfortunately not freely available online (published by Springer – I would suspect most university students can get it freely through their library's website, however).


LADR was freely available from Springer at some point, under their open access program. I couldn't find it again in a few minutes' search, so it may be gone.


Ah yes, a fellow Brazilian. I've always found Elon's book on Linear Algebra a masterpiece. Coupled with the exercises book, going through it is an eye-opening experience.


Could you please talk about alternative ways of learning about linear algebra?



