My rating: 5 of 5 stars

Imagine a history book that examines the philosophical foundations of mathematics, specifically the quest, culminating in the years leading up to the First World War, to establish all mathematical reasoning on a firm logical basis. That book would have a lot of ground to cover. It would have to disentangle some complex mathematics to present to the non-specialist in a meaningful way, as well as shed light on the manic, driven, fascinating characters behind this story, people like Bertrand Russell, Kurt Gödel, David Hilbert, and Ludwig Wittgenstein. Finally, it would need to shed at least some light on the historical scaffolding upon which this drama played out: the turn of the century, the First World War, the rise of Nazism, and interwar Vienna. A tall order for any book, let alone a comic book.

So now imagine that book as a graphic novel—moreover, as a graphic novel that succeeds at all these tasks. That’s what you’ve got with *Logicomix*, a complex, stirring, well-executed, multi-layered work that brings to life one of the most compelling chapters of mathematical and philosophical history. In a general sense, the graphic novel (which is hefty, weighing in at over 300 pages not counting the reference material at the end) could be considered a stylized biography of Bertrand Russell (1872-1970), 3rd Earl Russell, the British logician and grandson of a Prime Minister, who began his career with an attempt to bring logical rigor to all mathematical reasoning.

Beyond Russell’s stylized biography (stylized because the historical interactions in the graphic novel are artfully fudged for better dramatic effect), the narrative of *Logicomix* plays out on three levels. Level one is the primary chronological narrative, but level two is the fact that this primary narrative is presented as a lecture delivered by the aging Russell in America near the end of his career. Hecklers in the audience want to know whether Russell, who was famous for his conscientious objections during the First World War, will join them in protesting America’s entry into the Second. Russell promises them their answer in the lecture, and these interactions, as Russell summarizes his career and offers insights on the role of logic in human affairs, bookend the first level narrative and interrupt it occasionally as audience members get rowdy or impatient.

This first narrative—the series of chronological flashbacks forming Russell’s lecture—is the main medium of the storytelling in *Logicomix*. We see Russell as a young, troubled child in an authoritarian home, finding the basis of truth and certainty in mathematics. As a student at Cambridge, Russell becomes obsessed with the logical foundations of mathematics, catalyzed by the 1900 challenge of David Hilbert, and uses the new logical formalism of Gottlob Frege to try to establish mathematics on completely rigorous, firm foundations. This is the work he spends the first decades of his career on, collaborating with Alfred North Whitehead to produce their *Principia Mathematica*, which—as Russell recounts wryly—took over 200 pages to prove that 1 plus 1 equals 2.

If this sounds like the stuff of esoteric mathematics, it is. But the success of *Logicomix* lies in making the story—which depends on the mathematics—both accessible and engaging. It provides enough of the technical details for the reader to get a conceptual notion of set theory, upon which Russell’s work rested, and of the damning implications of Russell’s paradox, which undermined these very foundations. The narrative continues, always through Russell’s eyes though his own work leaves the center stage, to explore Gödel’s incompleteness theorem, Wittgenstein’s *Tractatus Logico-Philosophicus*, and the rise and fall of the Vienna Circle on the eve of the Second World War.

It’s not quite history (as the authors admit they’ve altered the timeline a bit to give Russell meetings with characters he likely never met), but it is a sweeping and effective story of people and their ideas. It’s not quite philosophy or mathematics either, but there’s enough of both to make *Logicomix* intellectually rich and rewarding—from the logical puzzles themselves (boiled down to their conceptual themes) to the exploration of philosophical approaches to mathematics, contrasting Gödel’s Platonic, Poincaré’s inductive, and Wittgenstein’s linguistic approaches to the true meaning of mathematics and its relation to the physical world or the human mind. It’s a story with meat on its bones, executed in bright, clean, understated art that brings the characters and the locales to life without overshadowing the concepts it explores.

As with history of thought done well, the book is as much about the people as the ideas with which they wrestled. One of its primary themes is the question of what sort of mind or personality it takes to devote a life to wrestling with the basics of logic. We see this most with Russell and the background of madness he worked and fought against, as well as in the peripheral characters of Cantor, Frege, Gödel, and Hilbert. The close relationship between madness and logic—as well as questions of the place of logic in life—is explored by Russell himself in the course of his lecture, and by the authors and artists of the book as they make their appearance (and interact with the reader) throughout in the third “meta” level of the narrative.

It is this third level of narrative—and the balance it takes to run an additional narrative overtop of Russell’s lecture and his chronological flashbacks—that pushes *Logicomix* in some of its most interesting directions. This meta narrative represents the self-referential nature of the book itself (nicely complementing the theme of paradox in logic arising through self-reference, as in the case of Russell’s set theory paradox and Gödel’s incompleteness theorem): the authors and artists are characters in their own book, working in modern Athens to write about Russell and the logical foundations of mathematics. We are invited into their studio to witness the discussions between them as they work. In this way, we simultaneously receive additional background to what happens before and after the events of the novel, the rationale behind their specific approaches, and what we as readers are supposed to take from the story. As a bonus, we learn a lot about ancient Greek tragedy as well, which, tied elegantly to the discussion of logic and madness, brings the work to its poignant conclusion.

Self-reference does not work well in logic and mathematical proof, but it does quite nicely in literature (*The Neverending Story*, Gene Wolfe’s *Peace*, and *The Princess Bride* immediately spring to mind). There are other parallels to draw between the axiomatic formalism of mathematics and the rules and consistency that govern storytelling, but that is a post for another time. Suffice it to say, *Logicomix* is incredibly rewarding and opens the door to a host of further readings in history, mathematics, philosophy, and logic, aided and abetted by the helpful reference section at the end. Not many books I read merit the creation of an entire new shelf of “to read” books on Goodreads, but this one did.

In fact, although genuine mathematical investigations were undertaken by later Pythagoreans, the evidence suggests that Pythagoras was a mystic who believed that numbers underlie everything. He worked out, for instance, that perfect musical intervals could be expressed by simple ratios.


Let us imagine drawing a line of length $r$ in 3-D space, into the positive part of the coordinate system. We will draw this line at an angle $\theta$ above the x-y (horizontal) plane, and at an angle $\phi$ to the y-z (vertical) plane (see the figure below). When we drop a vertical line from our point onto the x-y plane it has a length $r\cos\theta$, as shown in the figure below.

We then increase the angle $\theta$ by a small amount $d\theta$, and increase the angle $\phi$ by a small amount $d\phi$. As the figure shows, the small surface element which is thus created is just $r\,d\theta$ multiplied by $r\cos\theta\,d\phi$, so $dA = r^{2}\cos\theta\,d\theta\,d\phi$.

To find the surface area of the sphere, we need to integrate this area element over the entire surface of the sphere. Therefore, we keep $r$ constant and we vary $\theta$ and $\phi$. We can go from $\theta = -\pi/2$ to $\theta = +\pi/2$ (the negative z-axis to the positive z-axis), and from $\phi = 0$ to $\phi = 2\pi$ (one complete rotation about the z-axis on the x-y plane), so we have

$$A = \int_{0}^{2\pi} \int_{-\pi/2}^{+\pi/2} r^{2}\cos\theta \, d\theta \, d\phi = 2\pi r^{2} \left[ \sin\theta \right]_{-\pi/2}^{+\pi/2} = 2\pi r^{2} \left( 1 - (-1) \right)$$

so, the total surface area of a sphere is

$$A = 4\pi r^{2}$$

as required. In a future blog I will use spherical polar coordinates to derive the *volume* of a sphere, where $r$ will no longer be constant as it is here.
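As a quick numerical sanity check (my own sketch, not part of the original derivation), we can integrate the surface element $r^2\cos\theta\,d\theta\,d\phi$ with a simple midpoint rule and compare against $4\pi r^2$:

```python
import math

# Numerically integrate the surface element dA = r^2 cos(theta) dtheta dphi
# over theta in [-pi/2, pi/2] and phi in [0, 2*pi), using a midpoint rule.
def sphere_area(r, n=1000):
    d_theta = math.pi / n
    total = 0.0
    for i in range(n):
        theta = -math.pi / 2 + (i + 0.5) * d_theta
        # the integrand has no phi dependence, so the phi integral just
        # contributes a factor of 2*pi
        total += r ** 2 * math.cos(theta) * d_theta * 2 * math.pi
    return total

print(sphere_area(2.0))        # close to 50.265...
print(4 * math.pi * 2.0 ** 2)  # 50.265... = 4*pi*r^2
```

The two printed values agree to several decimal places, as they should.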


(**Add.1**) For any two tuples (*k,x*) and (*l,y*) the addition operation is *commutative*: (*k,x*)+(*l,y*) = (*l,y*)+(*k,x*).

(**Add.2**) If *k*=*l*, (*k,x*)+(*k,y*) = (*k,x*+*y*), namely addition of sequences is principally *termwise*.

(**Add.3**) For any *k*,*l*, and *x* we have: (*k,x*)+(*l,*0) = (*k,x*). This says that (*k,x*)+0=(*k,x*), so 0 *is the additive identity*.

As for multiplication, we require that:

(**Mult.1**) For any two tuples (*k,x*) and (*l,y*) the operation is *commutative*: (*k,x*)(*l,y*) = (*l,y*)(*k,x*).

(**Mult.2**) If *k*=*l*, (*k,x*)(*k,y*) = (*k,xy*), namely multiplication of sequences is principally *termwise*.

The tuple 1(*k*) = (*k*,1) is a local multiplicative identity, or the *unity at* *k*. So our system is locally a field, and universally an additive commutative group. All in all, the system is a ring.

We order the elements when *k*=*l* as (*k,x*) < (*k,y*) if and only if *x*<*y*, and we determine the order of *a*=(*k,x*) and *b*=(*l,y*) by comparing the quantities *w*(*a*) and *w*(*b*): *a*<*b* if and only if *w*(*a*)<*w*(*b*). For example, for the function *w*((*n,z*))=*nz*, we have *w*(*a*)=*kx* and *w*(*b*)=*ly*. Then *a*<*b* if and only if *y*>(*k/l*)*x*. So if *k*=100 and *l*=1, (100,*x*)<(1,*y*) if and only if *y*>100*x*. Suppose *x* and *y* are both positive: then *y* has to exceed 100 times *x* before we can declare that the tuple with content *y* at location 1 is greater than the one with content *x* at location 100. This is the price *y* must pay for being located a hundred times earlier. Here *w* is called the *order function* of the system. In this example its role is to put greater emphasis on tuples of later localities when assigning magnitudes.

We stipulate that the order function *w* be linear not only with respect to addition but also to multiplication, namely that for any constants A and B:

(**Ord.1**) *w*(A*a*+B*b*) = A*w*(*a*)+B*w*(*b*).

(**Ord.2**) *w*(A*a*·B*b*) = A*w*(*a*)·B*w*(*b*) = AB·*w*(*a*)*w*(*b*).
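The axioms above are concrete enough to execute. Here is an illustrative Python sketch (my own, not from the text) of the location-content tuples, implementing only what Add.1-Add.3 and Mult.1-Mult.2 define, together with the example order function:

```python
# A tuple (k, x) pairs a location k with a content x.
def add(a, b):
    (k, x), (l, y) = a, b
    if k == l:
        return (k, x + y)          # Add.2: addition is termwise
    if y == 0:
        return (k, x)              # Add.3: (k, x) + (l, 0) = (k, x)
    if x == 0:
        return (l, y)              # by commutativity (Add.1) and Add.3
    raise ValueError("Add.1-Add.3 alone do not define this sum")

def mul(a, b):
    (k, x), (l, y) = a, b
    if k == l:
        return (k, x * y)          # Mult.2: multiplication is termwise
    raise ValueError("Mult.1-Mult.2 alone do not define this product")

def w(a):
    """The example order function w((k, x)) = k*x."""
    k, x = a
    return k * x

print(add((3, 2), (3, 5)))   # (3, 7)
print(mul((3, 2), (3, 5)))   # (3, 10)
# (100, x) < (1, y) iff y > 100*x; e.g. with x = 1, y must exceed 100:
print(w((100, 1)) < w((1, 101)))   # True
```

The `ValueError` branches make explicit that the listed axioms alone leave sums and products at different locations undefined.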

So far so good. Now we shall see how things go when we allow infinitely many applications of addition. Suppose we have infinitely many tuples *z*(*k*)=(*n*(*k*),*x*(*k*)) for *k*=1,2,3,… Let σ: **N** → **N** be a bijection, that is, a reordering of the indices. We require:

(**Inf.Add.1**) The sum ∑(k=1,2,3,…) z(k) = ∑(k=σ(1),σ(2),σ(3),…) z(k)

This definition conforms to our intuition of sequences.

The order function of the sum has naturally the following property:

(**Ord.Inf.Add.1**) *w*(∑(*k*=1,2,3,…) *z*(*k*)) = ∑(*k*=1,2,3,…) *w*(*z*(*k*)).

To anticipate the case where the right-hand side series diverges, we might redefine it to be the sequence *t*(1), *t*(2), *t*(3),… where *t*(*n*) = ∑(*k*=1,2,3,…,*n*) *w*(*z*(*k*)). This new definition of the values of the order function does not hurt the ordering of finite sums of tuples, which is preserved as before. But the disadvantages of sequence numbers already lurk in this setting. The situation might make the axiom system seem circular: it uses what it formalizes. However, there is a chance that we can change the function *w* to one that still suits the original intentions while proposing it formally within the system. So the system does have merit as a formalization, albeit a rather weak one.

Let us see a problematic case, say the sum *s* = ∑(*k*=1,2,3,…) (-1)^*k*, using various order functions. With the function *w*((*k,x*)) = *kx*, we have *w*(*s*) = ∑(*k*=1,2,3,…) *k*(-1)^*k*, whose partial sums form the sequence -1, 1, -2, 2, -3, 3,… This means that *s* is incomparable with sums *r* = ∑(*k*=1,2,3,…) *r*(*k*) for which *w*(*r*) = ∑(*k*=1,2,3,…) *kr*(*k*) converges, since the partial sums of *w*(*s*) oscillate without settling. With the function *w*((*k,x*)) = (*ln k*)*x*, we have *w*(*s*) = ∑(*k*=1,2,3,…) (*ln k*)(-1)^*k*, whose partial sums are 0, *ln* 2, *ln* 2 - *ln* 3, *ln* 2 - *ln* 3 + *ln* 4,… = 0, *ln* 2, *ln*(2/3), *ln*(8/3),… Does the latter function help? Probably not much, so long as the order function is meant to give increasing weight towards the tail. In fact the resulting order may well be independent of the choice of such a function: *r*<*s* under an order function *w*(1) if and only if *r*<*s* under another order function *w*(2), provided the order functions share certain characteristics such as positivity.
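The partial sums *t*(*n*) are easy to compute directly; this short sketch (my own check, not from the text) evaluates them for both order functions:

```python
import math

# Partial sums t(n) = sum_{k=1..n} w(z(k)) for the alternating sum
# s = sum_k (-1)^k, under two choices of order function.
def partial_sums(weight, n):
    sums, total = [], 0
    for k in range(1, n + 1):
        total += weight(k) * (-1) ** k
        sums.append(total)
    return sums

# w((k, x)) = k*x: the partial sums oscillate with growing amplitude.
print(partial_sums(lambda k: k, 6))          # [-1, 1, -2, 2, -3, 3]

# w((k, x)) = (ln k)*x: they oscillate too, just more slowly.
print([round(t, 3) for t in partial_sums(math.log, 4)])
# [0.0, 0.693, -0.405, 0.981]
```

Neither sequence settles down, which is the incomparability problem in miniature.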

Thus formalization does not prevent the troubles of incomparability among non-polynomial “numbers” and their associated functions. This is the price we must pay for the generality of the subject.

Infinity, however, might be formalized through a philosophically fictitious -yet formally clever- element, namely an infinitesimal *e* which becomes too small to regard when multiplied by itself *n* times: *e*^*n* = 0. This is called a nilpotent infinitesimal. Through Taylor series or the like, even transcendental functions cannot harm the system with problems of incomparability. For example, take the nilpotent infinitesimal *e* with *e*^2 = 0. Let us regard, informally, 1/*e* as the only infinity. Thus (1/*e*)^*n* = 1/*e* for *n* = 1,2,3,4,… Furthermore A(1/*e*) = 1/*e* for every positive real number A, and 1/*e*+B = 1/*e* for any real number B. These facts would give *sin*(1/*e*) = 1/*e*, since the Taylor series of *sin* x is x - (1/6)x^3 + (1/120)x^5 - … Similarly *cos*(1/*e*) = 1/*e*, since the Taylor series of *cos* x is 1 - (1/2)x^2 + (1/24)x^4 - …
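A nilpotent infinitesimal with *e*^2 = 0 can be modelled concretely by the so-called dual numbers. This minimal sketch (my own illustration, not from the text) represents a + b·*e* as a pair and shows *e* squaring to zero:

```python
# Dual numbers: a + b*e with e^2 = 0, stored as the pair (a, b).
class Dual:
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b

    def __add__(self, other):
        return Dual(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a + b e)(c + d e) = ac + (ad + bc) e, because e^2 = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

    def __repr__(self):
        return f"{self.a} + {self.b}e"

e = Dual(0.0, 1.0)
print(e * e)              # 0.0 + 0.0e : e is nilpotent
x = Dual(2.0, 1.0)        # 2 + e
print(x * x)              # 4.0 + 4.0e : (2 + e)^2 = 4 + 4e
```

Note how the coefficient of *e* in (2 + *e*)^2 is the derivative of x^2 at 2, which is why dual numbers also show up in automatic differentiation.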

Are we declining in our use of image devices in literature?

Arguing that we have flattened our culture or dumbed it down in the US actually manages to pack some rhetorical punch. Proving it, well that’s always a bit harder, but if one were to attempt such a thing the decline in symbolism and complexity in our popular culture would seem like a good place to start.

However, I think that’s the wrong approach.

It’s impossibly difficult to get a good aggregate sense of writing across the centuries. I’m sure someone could manage the task: an imaginary elite team of people willing to read the dross of the centuries and chronicle the transformation of literature might be able to accomplish this herculean feat. But let’s face it, they would want to be paid for their work at some point.

I want to counter the prevailing sensation through what is probably fallacious reasoning. First, I think it is wrong to make assumptions about the literary skill of an entire generation, and while I think there are great improvements to be made in areas like education and cognitive skill, specifically for the complex problems that we face now, that does not mean that we are any better or worse off than the generations that preceded us.

Importantly, we can note that there are still insanely beautiful works coming out, and some of them are on random blogs. You might not ever be able to find them, and even when I do manage to sift through the haystack, I miss parts of the writer’s imagery from ignorance. Even in a culture that is “in decline” we find works that suggest otherwise. The complexity of our culture has even added some brilliant subcultural works that are inspirational but somewhat esoteric – definitely harder to get a handle on.

One of the problems with teaching in our culture is getting people to take poetry seriously. It’s why “Dead Poets Society” or “Finding Forrester” come off so strong. If we are going to form good writers we have to take time. One complaint I do have about our culture and education system is that we spend far too much time on science and mathematics and far too little on literature and composition.

In other words, that limerick you wrote in eighth grade probably wasn’t sufficient to teach you how to dream up and compose a poem that resonates with your soul. The sonnet you wrote freshman year? That rhyme scheme and meter take quite a bit of practice to tease out the parts of your heart that do more than move blood about. It takes time to work with these forms and use them in a meaningful way.

We need that language formation. It allows us to express who we are in ways that we cannot through a bit of engineering and concrete, and poetry is what gives all that engineering a scope and a scale within human existence. While it can be beautiful to throw up a building or craft an airplane wing – those lack meaning without human beings who can share their reactions to what is going on. An albatross is a pretty bird, but it becomes something so much more in the hands of Coleridge crafting a tale of an ancient mariner.

It might not help us save the planet or give us modern conveniences beyond the dreams of avarice, but time spent in verse takes that time saved and those last moments and drapes them with the depth of human meaning.

*The Life in Papers of Sofie K*. is based on the life of Russian mathematician Sofia Kovalevskaya. It’s basically a literary/fantasy bio, maths meets magical realism. The premise comes from events in Sofie’s childhood – the poor kid had the most terrible Nanny. This cheerful woman apparently had a bent for macabre tales, and Sofie had life-long screaming nightmares about witches and werewolves and the Black Death. Now, I’m not a parent but I dare say if I had a small child I wouldn’t be tucking them into bed with stories about how people infected with bubonic plague were nailed inside houses that were subsequently set alight.

I mean, come on! Nanny dearest makes the Brothers Grimm look like fluffy bunnykins.

In my novella, Nanny’s nightmares come to life: Sofie is shadowed by a monster that is all her nightmares combined. But it’s not an enemy – it’s her shadow-self, sort of daemon and direwolf and Patronus, following Sofie through the universities of Europe. Because Sofie herself is also monstrous – or perceived as being so, with her enormous mathematical talent that allows her to break out of traditional gender roles and make her own way.

My last two novellas have been published by Masque Books, and Masque’s great. But this is too short for them, and I’ve been curious about self-publishing for a while now. I’ve a few more short novellas nearly ready to go that will probably end up the same way but *Sofie K*. is the first. Here’s a brief taster:

Sofie wants to study at the university at Berlin. This is not an easy thing: she lacks the parts preferred to do so, and the monster is no help. The professors do not want it in their libraries and their lecture rooms, with its thick soft paws and teeth that are too crescent for them, too light and lunar for comfort. Sofie cannot argue that they take the one without the other, cannot leave the monster at home. It is bound to her and will not leave, and she does not try to make it. If it were lost she would lose herself, so while she stops it from burying its teeth in those that thwart her, keeps it from poison and breaking and hammers, she does not leash or muzzle it.

Despite this, there is yet an avenue open to her: an appeal to the senate, a plea for scholarship. A chance for talent to rise above, to compensate for monstrosity. Sofie knows that she will need to sell herself, to walk the line between sty and nightmare, to look the part of a serious student. She holds herself as her father does, stiff-spined. She must appear more pig than monster, a curiosity in place of threat. Her shoes are sensible, rubbed clean of mud with thin lines of delicacy and they make a subdued clatter when she walks. An ugly bonnet makes her look older than she is. It shadows her mouth, and if Sofie ever had a tendency to simper then the bonnet does away with it, because she saw herself in mirrors before she came and this is not a hat to simper in. She does not wish to appear ridiculous.

(She knows some people will see her so anyway.)

The senate is unmoved. Talent is not enough for them–nor ability, nor enthusiasm. Even in clattery shoes she walks too lightly for them, has too much the whiff of monster. An aberration: something to be kept out with spells and salt and silver. They see only the bonnet (not how ugly it is, or the careful preparation behind it) and their fixed idea of women says that Sofie would be happier if she left maths well alone and circled millinery instead.

Anyway. If you want to read more you can find Sofie and her monster on Amazon. Check it out if it sounds like you!

This afternoon’s work of choice: the Seguin Board (Tens Board). The work is mastered when the child is able to associate a numeral with its quantity correctly for the numerals 10-99.

Suppose we take an electrical circuit made of resistors and put a ‘black box’ around it:

forgetting the internal details of the circuit and remembering only how it *behaves as viewed from outside*. As viewed from outside, all the circuit does is define a relation between the potentials and currents at the inputs and outputs. We call this relation the circuit’s **behavior**. Lots of different choices of the resistances would give the same behavior. In fact, we could even replace the whole fancy circuit by a single edge with a single resistor on it, and get a circuit with the same behavior!

The idea is that when we use a circuit to do something, all we care about is its behavior: what it does as viewed from outside, not what it’s made of.

Furthermore, we’d like the behavior of a system made of parts to depend in a simple way on the external behaviors of its parts. We don’t want to have to ‘peek inside’ the parts to figure out what the whole will do! Of course, in some situations we *do* need to peek inside the parts to see what the whole will do. But in this particular case we don’t—at least in the idealization we are considering. And this fact is described mathematically by saying that black boxing is a functor.

So, how do circuits made of resistors behave? To answer this we first need to remember what they *are!*

Remember that for us, a **circuit made of resistors** is a mathematical structure like this:

$$X \xrightarrow{\;i\;} \Gamma \xleftarrow{\;o\;} Y$$

It’s a cospan where:

• $\Gamma$ is a **graph labelled by resistances**. So, it consists of a finite set $N$ of **nodes**, a finite set $E$ of **edges**, two functions

$$s, t \colon E \to N$$

sending each edge to its **source** and **target** nodes, and a function

$$r \colon E \to (0,\infty)$$

that labels each edge with its **resistance**.

• $i \colon X \to \Gamma$ is a map of graphs labelled by resistances, where $X$ has no edges. A labelled graph with no edges has nothing but nodes! So, the map $i$ is just a trick for specifying a finite set of nodes called **inputs** and mapping them to $N$. Thus $i$ picks out some nodes of $\Gamma$ and declares them to be inputs. (However, $i$ may not be one-to-one! We’ll take advantage of that subtlety later.)

• $o \colon Y \to \Gamma$ is another map of graphs labelled by resistances, where again $Y$ has no edges, and we call its nodes **outputs**.

So what does a circuit made of resistors *do?* This is described by the principle of minimum power.

Recall from Part 27 that when we put it to work, our circuit has a **current** $I(e)$ flowing along each edge $e \in E$. This is described by a function

$$I \colon E \to \mathbb{R}$$

It also has a **voltage** across each edge. The word ‘across’ is standard here, but don’t worry about it too much; what matters is that we have another function

$$V \colon E \to \mathbb{R}$$

describing the voltage $V(e)$ across each edge $e$.

Resistors heat up when current flows through them, so they eat up electrical power and turn this power into heat. How much? The **power** is given by

$$P = \sum_{e \in E} I(e) \, V(e)$$

So far, so good. But what does it mean to minimize power?

To understand this, we need to manipulate the formula for power using the laws of electrical circuits described in Part 27. First, **Ohm’s law** says that for linear resistors, the current is proportional to the voltage. More precisely, for each edge $e \in E$,

$$I(e) = \frac{V(e)}{r(e)}$$

where $r(e)$ is the resistance of that edge. So, the bigger the resistance, the less current flows: that makes sense. Using Ohm’s law we get

$$P = \sum_{e \in E} \frac{V(e)^2}{r(e)}$$

Now we see that power is always nonnegative! Now it makes more sense to minimize it. Of course we could minimize it simply by setting all the voltages equal to zero. That would work, but that would be boring: it gives a circuit with no current flowing through it. The fun starts when we minimize power *subject to some constraints*.

For this we need to remember another law of electrical circuits: a spinoff of **Kirchhoff’s voltage law**. This says that we can find a function called the **potential**

$$\varphi \colon N \to \mathbb{R}$$

such that

$$V(e) = \varphi(t(e)) - \varphi(s(e))$$

for each $e \in E$. In other words, the voltage across each edge is the difference of potentials at the two ends of this edge.

Using this, we can rewrite the power as

$$P = \sum_{e \in E} \frac{1}{r(e)} \big( \varphi(t(e)) - \varphi(s(e)) \big)^2$$

Now we’re really ready to minimize power! Our circuit made of resistors has certain nodes called **terminals**:

$$T = \mathrm{im}(i) \cup \mathrm{im}(o) \subseteq N$$

These are the nodes that are either inputs or outputs. More precisely, they’re the nodes in the image of

$$i \colon X \to \Gamma$$

or

$$o \colon Y \to \Gamma$$

The **principle of minimum power** says that:

If we fix the potential on all terminals, the potential at other nodes will minimize the power

$$P(\varphi) = \sum_{e \in E} \frac{1}{r(e)} \big( \varphi(t(e)) - \varphi(s(e)) \big)^2$$

subject to this constraint.

This should remind you of all the other minimum or maximum principles you know, like the principle of least action, or the way a system in thermodynamic equilibrium maximizes its entropy. All these principles—or at least, most of them—are connected. I could talk about this endlessly. But not now!

Now let’s just *use* the principle of minimum power. Let’s see what it tells us about the behavior of an electrical circuit.

Let’s imagine changing the potential $\varphi$ by adding some multiple of a function

$$\psi \colon N \to \mathbb{R}$$

If this other function vanishes at the terminals:

$$\psi(n) = 0 \quad \text{for all } n \in T$$

then $\varphi + x \psi$ doesn’t change at the terminals as we change the number $x$.

Now suppose $\varphi$ **obeys the principle of minimum power**. In other words, suppose it minimizes power subject to the constraint of taking the values it does at the terminals. Then we must have

$$\left. \frac{d}{dx} P(\varphi + x \psi) \right|_{x = 0} = 0$$

whenever

$$\psi(n) = 0 \quad \text{for all } n \in T$$

This is just the first derivative test for a minimum. But the converse is true, too! The reason is that our power function is a sum of nonnegative quadratic terms. Its graph will look like a paraboloid. So, the power has no points where its derivative vanishes except minima, even when we constrain $\varphi$ by making it lie on a linear subspace.

We can go ahead and start working out the derivative:

$$\frac{d}{dx} P(\varphi + x \psi) \Big|_{x = 0} = \frac{d}{dx} \sum_{e \in E} \frac{1}{r(e)} \Big( \varphi(t(e)) - \varphi(s(e)) + x \big( \psi(t(e)) - \psi(s(e)) \big) \Big)^2 \Big|_{x = 0}$$

To work out the derivative of these quadratic terms at $x = 0$, we only need to keep the part that’s proportional to $x$. The rest gives zero. So:

$$\frac{d}{dx} P(\varphi + x \psi) \Big|_{x = 0} = \sum_{e \in E} \frac{2}{r(e)} \big( \varphi(t(e)) - \varphi(s(e)) \big) \big( \psi(t(e)) - \psi(s(e)) \big)$$

The principle of minimum power says this is zero whenever $\psi$ is a function that vanishes at terminals. By linearity, it’s enough to consider functions $\psi$ that are zero at every node except one node $n$ that is not a terminal. By linearity we can also assume $\psi(n) = 1$.

Given this, the only nonzero terms in the sum

$$\sum_{e \in E} \frac{2}{r(e)} \big( \varphi(t(e)) - \varphi(s(e)) \big) \big( \psi(t(e)) - \psi(s(e)) \big)$$

will be those involving edges whose source or target is $n$. We get

$$\sum_{e:\, t(e) = n} \frac{2}{r(e)} \big( \varphi(t(e)) - \varphi(s(e)) \big) \;-\; \sum_{e:\, s(e) = n} \frac{2}{r(e)} \big( \varphi(t(e)) - \varphi(s(e)) \big)$$

So, the principle of minimum power says precisely

$$\sum_{e:\, t(e) = n} \frac{1}{r(e)} \big( \varphi(t(e)) - \varphi(s(e)) \big) \;=\; \sum_{e:\, s(e) = n} \frac{1}{r(e)} \big( \varphi(t(e)) - \varphi(s(e)) \big)$$

for all nodes $n$ that aren’t terminals.

What does this mean? You could just say it’s a set of linear equations that must be obeyed by the potential $\varphi$. So, the principle of minimum power says that fixing the potential at terminals, the potential at other nodes must be chosen in a way that obeys a set of linear equations.

But what do these equations *mean?* They have a nice meaning. Remember, Kirchhoff’s voltage law says

$$V(e) = \varphi(t(e)) - \varphi(s(e))$$

and Ohm’s law says

$$I(e) = \frac{V(e)}{r(e)}$$

Putting these together,

$$I(e) = \frac{1}{r(e)} \big( \varphi(t(e)) - \varphi(s(e)) \big)$$

so the principle of minimum power merely says that

$$\sum_{e:\, t(e) = n} I(e) \;=\; \sum_{e:\, s(e) = n} I(e)$$

for any node $n$ that is not a terminal.

This is **Kirchhoff’s current law**: for any node except a terminal, the total current flowing into that node must equal the total current flowing out! That makes a lot of sense. We allow current to flow in or out of our circuit at terminals, but ‘inside’ the circuit charge is conserved, so if current flows into some other node, an equal amount has to flow out.

In short: the principle of minimum power implies Kirchhoff’s current law! Conversely, we can run the whole argument backward and derive the principle of minimum power from Kirchhoff’s current law. (In both the forwards and backwards versions of this argument, we use Kirchhoff’s voltage law and Ohm’s law.)
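The equivalence is easy to check on a small example. Here is a minimal Python sketch (my own illustration, not code from the post): a chain of three resistors with terminals at the ends, where we solve Kirchhoff’s current law at the interior nodes and verify that the resulting potential minimizes the power:

```python
# A chain of three resistors: nodes 0-1-2-3, edges (0,1), (1,2), (2,3),
# terminals {0, 3}.  Fix the potential at the terminals, solve
# Kirchhoff's current law at the two interior nodes, and check that
# the result minimizes the power.
edges = [(0, 1), (1, 2), (2, 3)]
r = [1.0, 2.0, 3.0]
phi0, phi3 = 6.0, 0.0            # potentials fixed at the terminals

def power(phi):
    # P = sum over edges of (phi(t) - phi(s))^2 / r(e)
    return sum((phi[t] - phi[s]) ** 2 / r[i] for i, (s, t) in enumerate(edges))

# KCL at nodes 1 and 2 gives a 2x2 linear system; solve by Cramer's rule.
a11, a12 = 1 / r[0] + 1 / r[1], -1 / r[1]
a21, a22 = -1 / r[1], 1 / r[1] + 1 / r[2]
b1, b2 = phi0 / r[0], phi3 / r[2]
det = a11 * a22 - a12 * a21
phi1 = (b1 * a22 - a12 * b2) / det
phi2 = (a11 * b2 - a21 * b1) / det
phi = [phi0, phi1, phi2, phi3]
print(phi)   # approximately [6.0, 5.0, 3.0, 0.0]

# Perturbing the potential at an interior node (terminals fixed)
# can only increase the power:
for eps in (0.1, -0.1):
    assert power([phi0, phi1 + eps, phi2, phi3]) > power(phi)
```

The potential drops across each resistor in proportion to its resistance, exactly as the series-circuit intuition suggests.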

When the node $n$ is a terminal, the quantity

$$J(n) = \sum_{e:\, t(e) = n} I(e) \;-\; \sum_{e:\, s(e) = n} I(e)$$

need not be zero. But it has an important meaning: it’s the amount of current flowing into that terminal!

We’ll call this the **current at the terminal** $n$. This is something we can measure even when our circuit has a black box around it:

So is the potential $\varphi(n)$ at the terminal $n$. It’s these currents and potentials *at terminals* that matter when we try to describe the behavior of a circuit while ignoring its inner workings.
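For instance (my own sketch, not from the post), a chain of resistors in series, viewed from outside, relates its terminal potentials and currents exactly like a single resistor whose resistance is the sum:

```python
# Viewed from outside, a series chain of resistors behaves like one
# resistor of resistance r_1 + ... + r_n.
def terminal_current(psi_in, psi_out, resistances):
    # The same current I flows through every edge of the chain, and the
    # voltage drops add up: psi_in - psi_out = I * (r_1 + ... + r_n).
    return (psi_in - psi_out) / sum(resistances)

# Three resistors in series behave like a single 6-ohm resistor:
assert terminal_current(6.0, 0.0, [1.0, 2.0, 3.0]) == terminal_current(6.0, 0.0, [6.0])
print(terminal_current(6.0, 0.0, [1.0, 2.0, 3.0]))  # 1.0
```

This is the sense in which many different circuits share one behavior.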

Now let me quickly sketch how black boxing becomes a functor.

A circuit made of resistors gives a *linear relation* between the potentials and currents at terminals. A relation is something that can hold or fail to hold. A ‘linear’ relation is one defined using linear equations.

A bit more precisely, suppose we choose potentials and currents at the terminals:

$$\psi \colon T \to \mathbb{R}, \qquad J \colon T \to \mathbb{R}$$

Then we seek potentials and currents at all the nodes and edges of our circuit:

$$\varphi \colon N \to \mathbb{R}, \qquad I \colon E \to \mathbb{R}$$

that are compatible with our choice of $\psi$ and $J$. Here **compatible** means that

$$\varphi(n) = \psi(n)$$

and

$$J(n) = \sum_{e:\, t(e) = n} I(e) \;-\; \sum_{e:\, s(e) = n} I(e)$$

whenever $n \in T$, but also

$$I(e) = \frac{1}{r(e)} \big( \varphi(t(e)) - \varphi(s(e)) \big)$$

for every $e \in E$, and

$$\sum_{e:\, t(e) = n} I(e) \;=\; \sum_{e:\, s(e) = n} I(e)$$

whenever $n \notin T$. (The last two equations combine Kirchhoff’s laws and Ohm’s law.)

There either exist $\varphi$ and $I$ making all these equations true, in which case we say our potentials and currents at the terminals **obey the relation**… or they *don’t* exist, in which case we say the potentials and currents at the terminals *don’t* obey the relation.

The relation is clearly linear, since it’s defined by a bunch of linear equations. With a little work, we can make it into a linear relation between potentials and currents in

$$\mathbb{R}^X \oplus \mathbb{R}^X$$

and potentials and currents in

$$\mathbb{R}^Y \oplus \mathbb{R}^Y$$

Remember, $X$ is our set of inputs and $Y$ is our set of outputs.

In fact, this process of getting a linear relation from a circuit made of resistors defines a functor:

$$\blacksquare \colon \mathrm{Circ} \to \mathrm{LinRel}$$

Here $\mathrm{Circ}$ is the category where morphisms are circuits made of resistors, while $\mathrm{LinRel}$ is the category where morphisms are linear relations.

More precisely, here is the category $\mathrm{Circ}$:

• an object of $\mathrm{Circ}$ is a finite set;

• a morphism from $X$ to $Y$ is an isomorphism class of circuits made of resistors:

$$X \xrightarrow{\;i\;} \Gamma \xleftarrow{\;o\;} Y$$

having $X$ as its set of inputs and $Y$ as its set of outputs;

• we compose morphisms in $\mathrm{Circ}$ by composing isomorphism classes of cospans.

(Remember, circuits made of resistors are cospans. This lets us talk about isomorphisms between them. If you forget how isomorphisms between cospans work, you can review it in Part 31.)

And here is the category $\mathrm{LinRel}$:

• an object of $\mathrm{LinRel}$ is a finite-dimensional real vector space;

• a morphism from $U$ to $V$ is a **linear relation** $L \subseteq U \oplus V$, meaning a linear subspace of the vector space $U \oplus V$;

• we compose a linear relation $L \subseteq U \oplus V$ and a linear relation $L' \subseteq V \oplus W$ in the usual way we compose relations, getting:

$$L'L = \{ (u, w) \in U \oplus W \,:\, \exists v \in V \;\; (u, v) \in L \text{ and } (v, w) \in L' \}$$

So far I’ve set up most of the necessary background but not precisely defined the black boxing functor

$$\blacksquare \colon \mathrm{Circ} \to \mathrm{LinRel}$$

There are some nuances I’ve glossed over, like the difference between inputs and outputs as elements of $X$ and $Y$ and their images in $\Gamma$. If you want to see the precise definition and the *proof* that it’s a functor, read our paper:

• John Baez and Brendan Fong, A compositional framework for passive linear networks.

The proof is fairly long: there may be a much quicker one, but at least this one has the virtue of introducing a lot of nice ideas that will be useful elsewhere.

Perhaps next time I will clarify the nuances by doing an example.
