Math and Programming

A blog about my love of mathematics, functional programming, and computers.

Integer Partitions in Python


Spoiler alert: if you don’t want solutions to Project Euler Problems 31 and 78, stop reading now.

I want to write about these problems because, although they seem similar (and are mathematically similar), they require somewhat different solutions in order to run in a reasonable amount of time. Also, it will give me a chance to explain my thought process a bit.

Problem 31

This problem asks how many different ways you could make 2 GBP with regular British currency (1, 2, 5, 10, 20, 50, 100, or 200 pence). You might recognize this as being similar to the problem of finding partitions of integers.

How do we approach this problem? First, we notice that there are a lot of choices to make when trying to build 200 out of our list of values, so a brute-force “bottom-up” approach probably won’t work. That is: start with a coin value, say 5, then pick another coin value, say 10, and keep doing this until the sum equals or exceeds 200; then repeat for every possible sequence of choices.

If you do it this way, it’s basically impossible to do any sort of dynamic programming. You’ll generate the same collection of coins in many different orders, you’ll do the same computations over and over again, and there’s no way to avoid this.

A much better solution is to use the tree-like structure natural to this problem and recurse. Here’s what I mean: start with 200, then subtract off every possible coin value $c \in \{1, 2, 5, 10, 20, 50, 100, 200\}$, and recurse on each remaining amount $200 - c$.

There’s a catch though! Once you subtract a coin of value $c$, you’re never allowed to subtract a coin of value larger than $c$ in the same branch of the tree. This is to guarantee uniqueness of the partitions (i.e. so we don’t count, say, $2 + 2 + 1$ and $2 + 1 + 2$ as different partitions).

Here’s a sketch (literally) of what’s going on in my head when I think about finding ways to make 5 pence out of other coins.

Tree-like structure for this algorithm

Notice that each branch of the tree gives a different partition of 5 in terms of the coins 5, 2, and 1:

$$5, \qquad 2+2+1, \qquad 2+1+1+1, \qquad 1+1+1+1+1.$$

To get the number of ways to make 5 out of these coins, simply add up the number of branches in the tree!

Of course, there is a way to speed this up: we can have the computer keep track of already-computed values! But remember, in the above image, even though $5 - 2 = 5 - 1 - 1 = 3$, the number of branches below each of these two nodes differs. This is because on the branch where we’ve already subtracted a 1, we aren’t allowed to subtract coins bigger than 1 from the remaining value. So we don’t only need to remember the remaining value $n$, but also the largest coin $c$ we’re still allowed to use: the cache is keyed on the pair $(n, c)$.

Here’s a solution in Python:

coins = [
    1, 2, 5,
    10, 20, 50,
    100, 200
]

def how_many_ways(amount, coin_list, cache=None):
    # cache maps (remaining amount, largest coin we may still use) -> count.
    # It has to be shared across the recursive calls, so we thread it through.
    if cache is None:
        cache = {}
    if amount == 0:
        return 1
    ways = 0
    for x in [c for c in coin_list if c <= amount]:
        if (amount - x, x) not in cache:
            cache[(amount - x, x)] = how_many_ways(
                amount - x, [c for c in coin_list if c <= x], cache)
        ways += cache[(amount - x, x)]
    return ways

print(how_many_ways(200, coins))
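
As a quick sanity check, here’s the 5-pence example from the sketch above; it should print 4, one for each branch of the tree:

print(how_many_ways(5, [1, 2, 5]))  # 4: 5, 2+2+1, 2+1+1+1, 1+1+1+1+1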

Something to think about: what would happen if we didn’t have a coin with value 1? How would we need to change this?

Problem 78

After I wrote the function above, I went looking for other problems where I could use it. I found Problem 78, which asks for the smallest positive integer $n$ for which the number of partitions, $p(n)$, is divisible by 1,000,000.

Notice that finding integer partitions is a special case of the above coin-finding problem: for any value $n$, the set of coins is $\{1, 2, \ldots, n\}$.
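
For instance, here’s a quick sketch wrapping the function from Problem 31 (the helper name p is mine):

# p(n) = number of partitions of n, i.e. ways to make n from coins 1, 2, ..., n.
def p(n):
    return how_many_ways(n, list(range(1, n + 1)))

print(p(5))  # 7: 5, 4+1, 3+2, 3+1+1, 2+2+1, 2+1+1+1, 1+1+1+1+1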

If you want, you can start printing out values of $p(n)$ using the above function. You’ll find that $p(n)$ gets very large very quickly (over a hundred digits by the time $n$ reaches 10,000). Even with caching, we need something much quicker.

Euler to the rescue!

Luckily, Euler proved the wonderful formula

$$p(n) = \sum_{k \neq 0} (-1)^{k-1} \, p\!\left(n - \frac{k(3k-1)}{2}\right),$$

where the sum runs over all nonzero integers $k$ (positive and negative), $p(0) = 1$, and if $m < 0$ then $p(m) = 0$. Only the terms with $\frac{k(3k-1)}{2} \leq n$ contribute, so computing $p(n)$ from the previously computed values takes on the order of $\sqrt{n}$ additions.
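
As a sanity check: for $k = 1, -1, 2, -2, \ldots$ the generalized pentagonal numbers $\frac{k(3k-1)}{2}$ are $1, 2, 5, 7, \ldots$, so the recurrence gives

$$p(5) = p(4) + p(3) - p(0) - p(-2) = 5 + 3 - 1 - 0 = 7,$$

which matches the value $p(5) = 7$ we computed above.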

Unfortunately, this approach only works for partitioning, and we don’t have a similar formula for general coin-finding problems (at least, I don’t think so…).

So, the next step is to write a dynamic programming algorithm in Python that implements this. (Don’t try to run the following code. Trust me. I’ll explain in a bit.)

# List of p(n), starting with p(0) = 1.
part = [1, 1, 2]

# Uses Euler's recurrence to find p(n).
while part[-1] % 1000000 != 0:
    val = 0
    n = len(part)
    k = 1
    while n >= k * (3 * k - 1) // 2:
        val += (-1) ** (k - 1) * part[n - k * (3 * k - 1) // 2]
        k += 1
    k = -1
    # Beware: for negative k, (-1) ** (k - 1) is a float in Python.
    # This is the source of the crash described below.
    while n >= k * (3 * k - 1) // 2:
        val += (-1) ** (k - 1) * part[n - k * (3 * k - 1) // 2]
        k -= 1
    part.append(val)

print(len(part) - 1, part[-1])

If you’re interested, you can insert a print statement into the loop and watch $p(n)$ get big. By the time $n$ hits 10,000 or so, I think you’ll be looking at 100-digit numbers. You’ll also wonder why you haven’t seen 6 zeroes at the end of any of these numbers yet, and eventually Python will try to convert infinity to an integer and throw an error. (The culprit is the sign: for negative $k$, (-1) ** (k - 1) is a float in Python, so the list silently fills up with inexact floats, which eventually overflow to infinity.)

There’s one last thing to do, and I’ll admit it took me an embarrassingly long time to think of it. We only need to keep track of the last six digits of $p(n)$. Because of the way modular arithmetic works (in particular, $(a + b) \bmod m = ((a \bmod m) + (b \bmod m)) \bmod m$), and because Euler’s formula only involves adding and subtracting, instead of keeping $p(n)$ in the list “part”, just store $p(n) \bmod 10^6$.

# List of p(n) mod 10^6, starting with p(0) = 1.
part = [1, 1, 2]

# Uses Euler's recurrence to find p(n) mod 10^6.
while part[-1] % 1000000 != 0:
    val = 0
    n = len(part)
    k = 1
    # (-1) ** (abs(k) - 1) keeps the sign an integer; a negative
    # exponent would produce a float (the bug described above).
    while n >= k * (3 * k - 1) // 2:
        val += (-1) ** (abs(k) - 1) * part[n - k * (3 * k - 1) // 2]
        k += 1
    k = -1
    while n >= k * (3 * k - 1) // 2:
        val += (-1) ** (abs(k) - 1) * part[n - k * (3 * k - 1) // 2]
        k -= 1

    # We only need to keep the last six digits: the recurrence only adds
    # and subtracts, and (a + b) % m == (a % m + b % m) % m.
    part.append(val % 1000000)

print(len(part) - 1, part[-1])

Why I Left Academic Mathematics


I want to write something about why I decided to leave academic mathematics. I feel I owe it to colleagues and friends I’ve worked with, and I want to force myself to think clearly about my reasons. Aside from a few close friends, I haven’t talked to many people about this yet.

Things that aren’t reasons I’m leaving

Right away, I’d like to clear up some misconceptions people might have about my decision.

I’m not leaving because I felt I wasn’t good enough

or that I wasn’t smart enough to make it. It’s usual, in a field like mathematics, to feel dumb or to feel like you’re failing or not getting anywhere. I’ve had my fair share of this. But I made progress often enough, and enough people had confidence in me, that I never really doubted my own ability to succeed. I wasn’t the smartest or the quickest mathematician (not even close), but I am smart enough.

When I talked to my advisor about my decision to leave, he told me that he thought I had the potential to become a good academic mathematician (which, if you know my advisor, is one of the biggest compliments I’ve ever received). So there was never really any doubt about my ability to have a successful academic career.

I’m not leaving because of the people or the culture.

Although I’m certain that leaving mathematics is the right thing for me, it’s also not an easy decision. I’m choosing to leave a community of exceptionally intelligent, enthusiastic, quirky, fun, and kind people. It’s hard for me not to think of it as a betrayal; these people encouraged me, answered my dumb questions, went out for beers with me, let me stay with their friends and put up with me for a month (Ric, Catherine), and invited me to give talks.

The community of Geometric Group Theorists (particularly you $\mathrm{Out}(F_n)$ and mapping class people) is a wonderful one, and I will miss attending conferences and events terribly (oh man, I’m kind of tearing up writing this part).

And if there’s any doubt about this, I still love mathematics.

The reasons

Let’s start with the easy one:

I don’t really enjoy teaching at a university.

I love explaining things I’m excited about to people. Ask any of my friends or mathematical acquaintances: if I get excited about something, I will try to explain it to you, and you will probably get annoyed with me if you spend enough time around me.

I hate dealing with all the bureaucratic crap that comes with teaching. I’m tired of all the students who don’t want to be there. I’m tired of students trying to get me to change their grade.

I loved talking to students one-on-one in office hours. It gave me a good chance to get an idea of where they stood mathematically and tailor my explanations to them. Unfortunately, most of the students who showed up to office hours for help were the good students. The bad students who showed up were almost always there to ask me to change a grade.

I really did care about teaching the material and trying to pass my excitement about mathematics along to the class. But I’m also of the philosophy that if a student wasn’t willing to try, I wasn’t going to put in any extra effort to babysit their education (which probably explains why my mediocre-good students gave me excellent reviews and really liked me, while the bad ones consistently trashed my teaching in reviews).

I have lots of hobbies

My greatest flaw as a mathematician (just winning out against my disdain for working out details) is that I had too many other hobbies. I firmly believe that a good academic mathematician (at least in their graduate school and post-doc years) needs to be obsessed with their research. You can have other hobbies, but given the choice, you’d ultimately pick mathematics over them if push came to shove.

I have nothing but respect for people who have the ability to do that. I don’t. I heard a lot of advice of the form “you should get really good at one thing” or “you should pick something you’re really passionate about and do that.” I’ve spent more time than I’d care to admit thinking about why I didn’t have that one thing I was really passionate about. I’ve often wondered what was wrong with me. I mean, I love mathematics, but I can’t see myself doing only that for the rest of my life. In fact, I couldn’t think of one thing I’d give up everything else for.

I love math. I love to ride my bike. I love to work on bikes. I love to play video games. I love to problem solve. I love to code. I love to learn about computer science. I love to read. I love to build things. I love to think about philosophy. I tried for a long time to pick one (math) and force myself to love it above everything else, and I think it made me really unhappy for a long time.

My epiphany? There is actually something I’m passionate about, and it’s something I missed my entire adult life until extremely recently. I’m passionate about learning new things. About solving problems. About not getting bored doing the same thing over and over.

I need time (and money) to have hobbies.

I need to be able to obsess about things that aren’t my job.

I don’t like to move

As anyone who’s ever been at a conference with me can tell you, I don’t like being away from the things that help me de-stress.

Let’s list some of those things: a bike, a coffee shop I like, video games, the places I normally eat, good internet.

I enjoy some degree of consistency in my life, and I don’t feel an academic life can provide this. The idea of moving for a post-doc and eventually a tenure track job every 3 years for the next 6-9 years sounds terrible to me. This is compounded by the fact that I wouldn’t get to pick where I’d be moving.

I want to choose where I live.

I want the ability to have a stable relationship.

“But wait!” you say. “I know lots of mathematicians, even grad students who have partners and even kids.” That’s true. But those families move around with that mathematician. My girlfriend happens to have a good software job in my favorite part of the country (which has great weather, lots of jobs, and great mountain biking). And even if I were single, the notion of being able to have a serious relationship without asking the other person to move every three years (or doing long-distance) would be attractive.

Let’s talk about money

I’m not leaving to get rich, but I believe that the hard work and talent of most mathematicians goes unrewarded. These are people who work absurd hours, deal with piles of bullshit from university administrators and students, and still manage to produce original research and educate undergrads. Unless you’re one of a handful of super-star mathematicians, you will get paid as much as, or less than, many first-year software developers fresh out of college, even in a tenured or tenure-track position.

I appreciate that people are so passionate about mathematics that they’re willing to put up with that. I’m not.

I would be happy with enough money to have a nice apartment/house, have nice bikes, buy loved ones nice things for birthdays/holidays, and travel some.

Please don’t take this the wrong way

Any mathematicians reading this, I don’t mean to offend. I really am in awe of people who love math so much that they’re willing to put up with all the things I described. I don’t understand it, but I think it’s amazing.

I will miss it.

Representation Stability Part 1


At the Joint Math Meetings in January 2014, I saw Benson Farb give a talk about some very beautiful, very cutting-edge mathematics called representation stability. It’s still very new (the first paper is from 2013), but it’s a very powerful tool that should prove useful in both purely mathematical and applied settings (namely physics).

Before I talk about representation stability, I should talk about the notion of stability. The word stability is used in many different settings, but I will define precisely what I mean here.

Stability

Let $X$ be a sequence of spaces

$$X_1 \to X_2 \to X_3 \to \cdots$$

For our purposes, they will normally be groups or manifolds and the maps will be group homomorphisms (usually injective) or continuous maps. Any category will do though. Let’s call the category where $X$ lives $\mathcal{C}$.

Let $F$ be a functor from $\mathcal{C}$ to another category $\mathcal{D}$. We say that $X$ is stable with respect to $F$ if, for some $n > 0$, we have $F(X_i) \cong F(X_{i+1})$ for all $i > n$. In other words, the maps in the sequence

$$F(X) = F(X_1) \to F(X_2) \to F(X_3) \to \cdots$$

are eventually isomorphisms. This is a very general concept, and there isn’t much one can say about general stability (there are just too many categories and functors!). However, there are some very special functors that mathematicians care about.

Homology

Homology, in general, is an advanced topic more suited to a graduate course in algebraic topology. The wonderful thing about homology, though, is that while the technical details are opaque to those without the required background, one can give a fairly complete intuitive definition without too much pain and suffering. I’m writing this section for the reader who knows little-to-nothing about homology - if you’ve taken an algebraic topology course, feel free to skip it.

Okay, so what is it?

(Very) roughly speaking, homology is a measurement of a topological space’s inability to be collapsed to a point. Before I get into any details, let’s look at some examples:

Consider $S^1$, the circle. I encourage the reader not to think about the circle as “living in” the plane or any larger space, but to think about it as an object on its own. $S^1$ cannot be collapsed to a point: intuitively speaking, one cannot deform $S^1$ to a point without cutting it somewhere. (Remember, $S^1$ is not living in the plane. It is true that one can collapse the circle in $\mathbb{R}^2$ to a point in $\mathbb{R}^2$.)

(Actually, for the industrious reader: if one thinks of $S^1$ as the unit interval $[0,1]$ with $0$ and $1$ identified, the result that $S^1$ is not homotopy equivalent to a point can be made precise using only freshman calculus!)

Similarly, the 2-sphere $S^2$ cannot be deformed to a point. But if we look more closely, we see that the “holes” in these two spaces are different. Indeed, any 1-dimensional loop on $S^2$ can be deformed to a point, but nevertheless, there is still a “hole” in $S^2$ which prevents us from continuously deforming it to a point.

We need a way to measure these differences quantitatively; after all, this is mathematics! That’s what homology does.

The homology of a space $X$ is a collection of $\mathbb{Q}$-vector spaces $H_i(X,\mathbb{Q})$, one for each integer $i \geq 0$. The $\mathbb{Q}$ isn’t really important here; indeed, it’s possible to replace $\mathbb{Q}$ with any other field or ring (in which case, you get a module instead of a vector space). But I assume most of my readers are comfortable with vector spaces at least, so we’ll work over $\mathbb{Q}$.

Furthermore, if $f : X \to Y$ is a continuous map, then it induces a linear map $f_* : H_i(X,\mathbb{Q}) \to H_i(Y,\mathbb{Q})$ for all $i$. In other words, each $H_i(-,\mathbb{Q})$ is a functor from the category of topological spaces to the category of vector spaces over $\mathbb{Q}$! The nice thing about this is that homology is an invariant: that is, if $X \cong Y$, then $H_i(X,\mathbb{Q}) \cong H_i(Y,\mathbb{Q})$ for all $i$. So, if we want to tell if two spaces are different, we can (in theory) compute their homology and see if the resulting vector spaces have different dimensions!

Let’s return to our examples to see what homology does. I haven’t told you how one actually defines the vector spaces $H_i(X,\mathbb{Q})$ yet, so we can’t do any computations, but I can tell you the answers.

For a point,

$$H_i(\mathrm{pt},\mathbb{Q}) \cong \begin{cases} \mathbb{Q} & i = 0 \\ 0 & i > 0 \end{cases}$$

For $S^1$,

$$H_i(S^1,\mathbb{Q}) \cong \begin{cases} \mathbb{Q} & i = 0, 1 \\ 0 & i > 1 \end{cases}$$

which tells us (compare $H_1$) that $S^1$ cannot be deformed to a point. What about $S^2$? One can compute

$$H_1(S^2,\mathbb{Q}) = 0,$$

which tells us that $S^2$ cannot be deformed to $S^1$. Furthermore,

$$H_2(S^2,\mathbb{Q}) \cong \mathbb{Q},$$

so $S^2$ cannot be deformed to a point.

What about stability?

Remember, stability is a property of a sequence of spaces with respect to a functor, and now we have a handful of functors to play with! Let’s go back to the original setting: a sequence

$$X = X_1 \to X_2 \to X_3 \to \cdots$$

of spaces. We say that $X$ is homologically stable if for every $i > 0$, $X$ is stable with respect to $H_i(-,\mathbb{Q})$. Note that the point at which the sequence

$$H_i(X_1,\mathbb{Q}) \to H_i(X_2,\mathbb{Q}) \to \cdots$$

stabilizes might depend on $i$.
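
For a toy example, take $X_n$ to be a wedge of $n$ circles, with the maps the obvious inclusions. Then $H_1(X_n,\mathbb{Q}) \cong \mathbb{Q}^n$, so the induced maps $\mathbb{Q}^n \to \mathbb{Q}^{n+1}$ are never isomorphisms, and this sequence is not homologically stable. A constant sequence $X \to X \to \cdots$ (all identity maps), on the other hand, is trivially stable.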

That’s enough for one sitting. Next time?

In part 2, I want to discuss a motivating example where homological stability fails but “should” hold, explain what representation stability is, and show how it solves our problem.

Using Lapply to Import Files to R


One of the trickiest parts, for me, of learning a new language is figuring out how it interacts with the outside world (i.e. the rest of your computer). This post might seem dumb to people with even a few months of R experience, but I decided to post it anyway, if only to document my learning process. When I started learning R I was given the following task: you have a directory which stores several .csv files, say 001.csv, 002.csv, … 332.csv, and a task which requires you to do something with all of them. The task itself is irrelevant, so we’ll ignore it. How do you import all these files efficiently?

My first instinct (probably because the file names were all numbers) was to loop over all of the names. But, of course, I needed to ensure all of the leading zeroes were still there. So I needed to do something like

data <- list()  # will hold one data frame per file
for (id in 1:332) {
    num <- formatC(id, width = 3, flag = "0")  # 3-digit zero-padded string, e.g. "001"
    D <- file.path(directory, paste(num, "csv", sep = "."))  # the full file path
    x <- read.csv(D)
    data <- c(data, list(x))  # append the new data frame to the list
}

But that’s gross, and in any case, it doesn’t work if the names of the files don’t live in some list you can easily access (like 1:332). Recall, though, that R has some nice “map” functions, namely ‘lapply’. Also

dir()

returns a list of the names of files in the working directory. So

setwd("where your .csv files are")
data <- lapply(dir(), read.csv)

returns a list of data frames, one for each of the .csv files you needed to import. Voila.

Who Am I?


My name is Brian Mann, and I’m a mathematician turned programmer/computer scientist.

Currently, I’m a doctoral student at the University of Utah doing research in the field of geometric group theory under the advisement of Mladen Bestvina. Specifically, I study the outer automorphism group of finite rank free groups, and also the dynamics of group actions on $\mathbb{R}\mbox{-trees}$. I’ve written/co-authored two papers: Hyperbolicity of the Cyclic Splitting Complex and Constructing Non-uniquely Ergodic Arational Trees. (A word of warning: these papers are not for beginners. They require basic knowledge about standard tools in geometric group theory, especially Bass-Serre theory. I will likely talk more about the required background in later posts.)

For an overview of what I’ve been thinking about recently, you might want to look at slides from a recent talk I gave at the Joint Math Meetings.

I love mathematics. But for various reasons (which I will likely write about in the future) I have decided not to pursue a career in academics. I also love computers. I’ve always been fascinated by them and enjoyed all of the little coding projects I’ve needed to do in my career as a mathematician, but I never had time to really learn to code. Now that I’m graduating, I’ve decided to change that. I’m teaching myself Python, R, Haskell, and C++ and trying to get a job in the Seattle area which applies my mathematical knowledge/skills to computer-related problems (data science, cryptography, software development, etc…).

This blog is mainly to document my transformation from pure mathematician to a hybrid mathematician/computer-scientist/programmer. Although I am currently focusing on learning Python and R (which I feel will be most useful in my job hunt), I’m hoping to post a lot about Haskell. Which brings me to my next point…

I fucking love Haskell.

Look at this:

fibs = 1:1:zipWith (+) fibs (tail fibs)

I won’t lie and say this line of code made me fall in love with Haskell, because that’s not true. But it’s certainly one of the most beautiful lines of code I’ve ever seen: it lazily defines the entire infinite Fibonacci sequence in terms of itself. I fell in love with Haskell when a good friend (a hacker-turned-algebraic geometer-turned-computer security expert) told me about it. A language based on category theory, you say? Yes please.

Let me explain. I know a lot of category theory. I spent much of my undergrad at the University of Michigan taking graduate level math classes in algebraic geometry. And algebraic geometers love category theory. Why? Because it’s an incredibly expressive language which allows mathematicians to phrase difficult questions in simple, elegant terms. Sound familiar?

When I learned about Haskell, I didn’t know much about programming. But I knew enough to guess at the expressive power of a language based on categorical ideas. At the time, I poked around Learn You a Haskell a bit, but never made any real progress; as an undergraduate taking graduate-level mathematics who was sure he wanted to go into academics, the only real motivation I had to learn Haskell was curiosity. Now I have quite a bit more.

I love Haskell for the same reason I love mathematics. It’s beautiful. Expressive. Powerful. Working in Haskell teaches me to think in different ways about programming. And perhaps one of the reasons I like it so much is because it’s so familiar. It feels like mathematics, because it is mathematics.

Anecdote: I was curious about what a monad was, so naturally I went to the Wikipedia page. Admittedly, I was confused. Programmers think about things a little differently than mathematicians. So I went to the Wikipedia page on monads in category theory and learned what a monad was there. And everything made sense.

So what’s this blog for anyways?

My goal here is simple. I want to tell people about cool things I’ve learned and am learning. I want people to know that high-level mathematics isn’t scary. It’s beautiful and fun, just like Haskell. I want people to be curious and ask questions about what I post. I want to force myself to explain things in simple terms so I understand them better myself.