I've been thinking a bit about the importance of a college GPA. This is prompted in part by looking at resumes of students for jobs and internships. This post, of course, does not represent any of the companies I have worked for or work for.
In industry we often review resumes from students at top public schools, including UC Berkeley and the University of Washington. We passed on at least one resume with a 4.0 and phone-screened some candidates with interesting backgrounds but otherwise lower GPAs.
First, what's a "lower GPA"? These students are all generally smart (they have to be to get into a top public school), so they tend to have decent GPAs; anyone consistently getting low grades has been weeded out by this point. A good GPA was generally above 3.7, a median GPA might be 3.67, an OK GPA is around 3.4, and a bad GPA is anything below 3.3. So these students are getting A's, B's, and maybe an occasional C (note that we did not see the courses they were taking unless they were listed on the resume). A GPA around 3.4 made us think, "maybe this girl/guy is really smart but just working on other projects." So if the student listed other projects, we would consider her, but if she listed no projects, that GPA was a bad sign.
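To make those bands concrete, here is a minimal sketch in Python. The cutoffs are my own rough impressions from screening, as described above, not any company's formal rubric:

```python
def gpa_band(gpa, has_projects=False):
    """Rough resume-screening bands, per the thresholds above.

    Illustrative only: these cutoffs reflect my impressions,
    not a formal hiring policy.
    """
    if gpa > 3.7:
        return "good"
    if gpa >= 3.4:
        # An OK GPA is fine if the candidate shows outside projects;
        # with no projects listed, it's a yellow flag.
        return "ok" if has_projects else "ok, but ask about projects"
    if gpa >= 3.3:
        return "borderline"
    return "bad"
```

For example, `gpa_band(3.5, has_projects=True)` lands in the "ok" band, while the same GPA with nothing else on the resume prompts a closer look.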
I don't know about others reviewing resumes, but honor societies on resumes meant nothing to me. Maybe that's because I was rejected from the National Honor Society in high school and distinctly remember the students who got in making up community service activities for the application. Maybe it's because my parents took the bait from the National Society of Collegiate Scholars, which is really a sham organization. But that's another blog post.
Anyways, we passed on some students with very high GPAs, because they sometimes didn't seem involved in anything besides school, and their classes weren't exactly what we needed. It was far more important to do internships at interesting places or to be involved in interesting research projects.
Of course, to get those top internships, or to work for a professor, or to even get into a top college, a high GPA is necessary. So my recommendation is that students focus on grades in high school and early college, and, if they're interested in an industry job, they should do a rockstar job at the companies they work at. And if they're not involved in extracurricular activities (a relevant job or research assistantship), they should have a damned high GPA.
If students want to go to graduate school, grades in higher-level classes are important. Again, though, it is much more important for students to demonstrate their ability to do independent research than to earn a high GPA (I know at least one guy with a C in a calculus class who got accepted to an Ivy League CS graduate program, and I'm sure there are many more).
For what it's worth, I was always very school-oriented. I didn't have a 4.0 in college, but I was still at the higher end of the range. So I write this post to argue that such a focus on school is not absolutely necessary and can in fact be a bad choice, if it comes at the expense of relevant experience.
The Wonderful Adventures of Sean
"ONCE there was a boy. He was, let us say, something like twenty seven years old; long and loose jointed and towheaded. He wasn't good for much, that boy. His chief delight was to eat and sleep, and after that he liked best to make mischief."
-- adapted with modification from The Wonderful Adventures of Nils.
Sunday, November 04, 2012
Tuesday, October 02, 2012
The inevitable accountability cliff for higher ed
The press has been heating up over the past few years with reports of high student loan debt and questionable employability after school. In this post I will argue that higher education will inevitably face a reckoning in which universities are held more accountable for student outcomes. The need for such accountability grows as colleges set the bar lower and lower to attract more students.
The goal of this article is to discuss the economic outcomes of a college education. There are many other benefits students find in college, such as building a social network or becoming "learned" about the world; these are hard to put a price tag on, short of careful experiments. So let's set those benefits aside and focus on what we are able to measure: earning potential and economic value.
Whether colleges are providing value for students is no mystery: students' outcomes are determined in large part by their fields of study and the quality of the school attended. But hundreds of thousands of students graduate each year unable to find a job, and 44.7% of those who do find one end up working somewhere a degree is not needed. The cause is that schools bear little accountability for students' outcomes. Although colleges must meet a minimum standard for their students to receive student loans, the rules enforced by the Department of Education set an extremely low bar (thanks to education loan lobbyists).
The implications of the education bubble
How will this education bubble turn out? We will probably continue to see growth in the student-loan industry as long as interest rates remain low -- which will last until 2015, according to the Fed. Once interest rates pick back up, the gravy train for schools will start to end, but the effects may take a few more years to come to a head, because students will need to start defaulting on their loans. This is scary, since the bubble is already over a trillion dollars, and it is not clear what will happen to these students in the meantime.
Since student loans are guaranteed by the federal government, and since lenders have easy recourse to garnish wages, this is likely to be an ongoing drag on the broader economy and on an entire generation of debt-ridden Millennials. As public opinion turns and outraged citizens begin to insist on greater accountability for colleges (some students are already suing their for-profit colleges for misleading them), politicians will take less in lobbyist donations and start to set the bar higher for colleges to receive funding. Several high-profile reports will come out about colleges knowingly graduating students without proper job skills. Any college without a significant cash endowment would be well advised to stop spending on new projects, sell unnecessary property through 2013 and 2014, and hold on tight.
Dovetailing with the impending student loan crisis are technological advances in higher education. Startups like Coursera are making it extraordinarily cheap for the highest-quality professors to teach students. This also means that the old brick-and-mortar college experience is becoming less necessary, and it is bad news for intermediate-quality faculty and institutions. The most elite colleges are likely to stay, as the private boarding schools they've always been, but countless intermediate institutions will find it hard to compete. In a few years, the worst careers will be those at these intermediate colleges -- the ones built up to require high tuition but which do not receive the best of the best students.
The highest graduation rate in the world
It's worth noting that the Obama administration is encouraging students to go to college, seeking "the highest college graduation rate" in the world. This is a nice goal if everything else is held equal, but the plan optimizes the wrong metric. The administration is getting what it asked for, all right, as colleges pump out young English majors while federally subsidized loans continue to inflate those colleges' profits. But this is more because standards are dropping and less because students are becoming educated.
It used to be the case that good colleges survived in part on graduates' kind donations. With the mandate to graduate as many college students as possible, and with interest rates so low, a new breed of college has evolved to subsist purely off of student loans. The problem with this brand of college is that its incentives are completely disconnected from students' outcomes. These colleges prey on low-income students, selling dreams of a bright future, and graduate them at a low bar to make room for the next batch. There is no accountability to these students, short of meeting lax federal rules on the fraction of students who can default on their loans.
Mitigating the education bubble
What do we really want when we ask for more college graduates? We're really asking for a college-educated population, and we should adjust our metric of success to reflect that. Instead of aiming for the highest college graduation rate (a metric that can be gamed by setting the bar lower and inflating colleges with a student loan bubble), we should aim to produce the best-educated country in meaningful academic fields, such as science, math, and technology. This is easily tested by standardized exams, it is less easily gamed, and it is more meaningful than "increase the college graduation rate."
A real solution would have two prongs. First, it would focus on improving education at the primary and secondary levels. This fall, over two million students will take remedial classes in college to learn what they did not learn in high school. This is simply unacceptable, in part because these students are paying for an education they should have gotten for free in high school; instead, they will be left saddled with debt for taking classes like Algebra II. It is also unacceptable because colleges are expanding their remedial programs to accommodate these students, in a creep toward privatized secondary education. Improving primary and secondary education would also make the impact on the economy easier to measure from a basic accounting standpoint.
The second prong would be to increase schools' accountability to students by guaranteeing that their degree programs lead to gainful employment. One possibility is to require that colleges receive a specific fraction of their income in alumni donations. Another is to require schools to underwrite student loans for a fraction of their students.
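To see how that second idea changes a school's incentives, here is a toy calculation (all numbers are hypothetical illustrations, not data): if a school must underwrite loans for some fraction of its students, its expected liability scales directly with its graduates' default rate, so graduating students who can't find work becomes a direct cost to the school.

```python
def expected_school_liability(n_students, avg_loan, underwrite_frac, default_rate):
    """Toy model of a school's expected cost when it underwrites a
    fraction of its students' loans. All inputs are hypothetical."""
    underwritten = n_students * underwrite_frac
    return underwritten * avg_loan * default_rate

# A hypothetical school with 1,000 students carrying $25k average loans,
# underwriting 20% of them:
low = expected_school_liability(1000, 25_000, 0.20, 0.05)   # 5% default rate
high = expected_school_liability(1000, 25_000, 0.20, 0.25)  # 25% default rate
```

In this sketch, letting the default rate climb from 5% to 25% multiplies the school's expected liability fivefold, which is exactly the kind of skin in the game the current system lacks.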
Further reading:
College majors by income: http://www.payscale.com/college-salary-report-2013/majors-that-pay-you-back
Grads having trouble finding work: http://www.nytimes.com/2011/05/19/business/economy/19grads.html
Student loan lobbyists: http://www.nytimes.com/2010/02/05/us/politics/05loans.html?pagewanted=all
Remedial education: http://www.bloomberg.com/news/2012-09-11/college-is-no-place-for-remedial-education.html
Colleges with the lowest graduation rates: http://www.huffingtonpost.com/2010/12/15/colleges-with-the-lowest-_n_797119.html
Targeting minorities: http://chronicle.com/article/For-Profit-College-Is-Accused/133225/
Saturday, July 21, 2012
Why "How was the Universe created?" is a meaningless question
One of the biggest questions plaguing humanity throughout history has been "How was the Universe created?" In the context of the Universe, this is actually a meaningless question, for reasons I'll outline below. Once we frame the question better, the answer may not seem quite so troubling.
The question "How was the Universe created?" has been researched by physicists for hundreds -- thousands -- of years, and most modern physicists agree that the Universe as we know it came from the Big Bang some thirteen to fourteen billion years ago. They have the physics of what happened down to a few milliseconds after the Big Bang (perhaps I'm off by a few orders of magnitude; regardless, they can "see" back pretty far). Of course, the Big Bang isn't really an answer to this question, since it doesn't tell us how the Universe was created -- only that it happened, and that the Big Bang is the best description we have for the early stages of the Universe as we know it.
My goal is not to criticize this work on the Big Bang. That research has been carried out by extremely smart people, and it provides useful information for answering the present question. My goal is instead to explain why the original question is poorly formed, if it is even meaningful at all.
Why it doesn't make sense to ask, "How was the Universe created?"
The problem with this question is that its framing presupposes intuitions about time that shouldn't be assumed: first, that the Universe did not exist at some time and did exist at a later time; second, that time is a meaningful, well-defined way to describe the Universe as a whole, and that it was defined "when the Universe was created" in such a way that we could meaningfully use it to discuss the Universe existing and not existing. Asking how the Universe was created is like asking, "What's at the edges of the Earth?" -- meaningful if the Earth is flat (and finite), but meaningless in many other cases (for example, if the Earth turns out to be round).
This problem has its roots in the fact that our intuition for the Universe is built up from our small glimpse of a very local part of it. We know about three spatial dimensions and time, and we have results from some important scientific experiments. Einstein's relativity indicates that time is drastically different from how we perceive it. Only people who think hard about relativity have any intuition for how time works at the galactic (or super-duper-fast) scale, and nobody -- not even physicists -- is really sure how it works at the subatomic level, which was critically important during the Big Bang (many researchers have theories, but these theories are impossible to validate with our current technology). Some researchers even speculate that we may have more than one time dimension.
Our sense of time is defined by its relevance to us, organisms that evolved to survive under the physics of the Universe as it exists at Earth scale. There's no reason to think that this sense of time is particularly relevant at Universe scale. Given its bizarre behavior and how little we know about it, time may matter as little to the Universe as the sense of taste does to a human -- great if you have it, but an afterthought if you don't.
Second, we have no indication that the Universe was created out of nothing. Our daily intuition clearly tells us that it's impossible to create something out of nothing, and without time before the Universe existed, it's virtually impossible for it to have not existed before it existed, because there was no "before." You can't not exist at a certain time if time doesn't exist. These things, and our everyday experience with how things are created, suggest that the Universe was not created. It exists. (You may ask why I allow myself to use everyday experience as a premise when I argued against using everyday intuition about time. My point is that, if one insists on using everyday intuition about time to ask these questions, then it's fair game for me to use our everyday intuition about causality to show why the question is invalid.) We also have no reason to think that the Big Bang was an isolated incident, or that the Universe was created when the Big Bang happened. It's possible that the Universe was growing exponentially at the time of the Big Bang, halving in size for every T units of time you look back. It's possible that the Universe periodically expands, contracts, expands again, and contracts again, ad infinitum. It's possible that time is a giant loop, and that it will repeat itself trillions of years hence. None of these are ruled out by the Big Bang. (N.B. I am not a physicist; this is my "I-got-a-minor-in-physics" understanding.)
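As a toy illustration of that first possibility (the numbers and units here are purely hypothetical): a Universe that doubles in size every T units of time halves for every T units you look back, yet is nonzero at every finite time in the past -- there is no moment of creation in such a model.

```python
def size(t, s0=1.0, T=1.0):
    """Size of a toy exponentially expanding universe at time t.

    s0 is the size at t = 0; the size doubles every T units of time,
    i.e. halves for every T units you look back. Purely illustrative.
    """
    return s0 * 2 ** (t / T)

# Looking back any finite number of doubling times, the size is smaller
# but still strictly positive -- the size never hits zero.
```

For instance, `size(-1)` is half of `size(0)`, and `size(-50)` is astronomically small but still greater than zero.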
In short, it simply doesn't make sense to ask how the Universe was created when that question relies on our conception of time, which is itself defined by the Universe. This is especially so when we have no reason to think there is a beginning or end of time. Again, asking "How was the Universe created?" is like asking "What's at the edges of the Earth?"
This isn't to say it's not worth pursuing these grand questions, or that we shouldn't study the history of the known Universe. Rather, it's to say that we should be careful to frame our philosophical questions in terms that don't change with the object of the questions. Instead, perhaps it makes more sense to ask, "Why does the Universe exist?" This question does not require time to be defined, yet it still gets at the same issue.
So, why does the Universe exist?
This new question has its limits, again because we are relying on our own intuition and aesthetics. Still, I'll try my best.
First, this question is well formed thanks to Descartes's original cogito, ergo sum: "I think, therefore I am." This statement is pretty much irrefutable, at least for the thinker. If I define the Universe to be whatever scaffolding allows me to think, or to exist, or to think about existing, then it's also clear that the Universe does indeed exist. And it's perfectly legitimate to ask why it exists.
The next part of unraveling this question is establishing what we mean by "why." When I ask why the Universe exists, I am asking why it exists instead of not existing.
The alternative: the Universe does not exist
When I think about the Universe not existing, I'll admit that my first conception is of a vacuum, a bit like outer space, forever. Yet even this seemingly innocuous conception is flawed: there is no sense of time in a nonexistent Universe, and my conception of a vacuum assumes three spatial dimensions, ignoring the very fabric of the Universe that creates those dimensions. It is more accurate to simply imagine nothing, with no sense of time or space. A point, perhaps -- zero dimensions, with no sense of time or space -- may be the most reasonable approximation we can conceive of. But remember that this point has no time -- it's a point, frozen, forever. Quite depressing.
Once the alternative to the Universe is framed this way, it's less clear why that alternative -- pure nothing, or at most a point (i.e., zero dimensions) -- is in any way superior to the Universe as it is. Why should there be a point Universe instead of nothing? Tell me what nothing is, and how it differs from a point. I'll claim first that nothingness is equivalent to a point, and second that, aesthetically, a Universe with more dimensions is preferable to one with zero dimensions. In fact, borrowing intuition and aesthetics from information theory, the Universe should prefer more space to a point so that it has more room to spread things out. A single point has very little entropy, and the Universe would feel much better if it weren't so cramped, taking up four, nine, ten, or more dimensions.
But in my opinion it's actually harder to conceive of nothingness -- or a point -- than of what we have. So maybe that's one (aesthetic) argument for why we should have the Universe over nothing: because it's hard to conceive of things otherwise.
Arguments from first principles
This still doesn't answer the question of why the Universe exists. But I would feel equally enlightened knowing the answer to this question as to my original one ("How was the Universe created?"), especially knowing that the original question is meaningless. I could also imagine some reasonable arguments from first principles in favor of the Universe existing, or in favor of the Universe trying to figure out whether it should exist or not, in a secular way, and (oh, snap) having to exist while figuring that out.
Another argument for the Universe existing is that there are some logically irrefutable facts, such as "a triangle has three sides" or "two plus three is five." These are true regardless of whether the Universe exists. If the Universe did not exist, wouldn't such facts still be true? Or is some sort of logical scaffolding, provided by a Universe, necessary for them to be true? I find the latter unlikely. And if logic exists when nothing else exists, is it really the case that nothing exists?
I won't be able to answer this question here and now, as it's been pondered publicly by philosophers for hundreds of years, but I hope that this discussion was helpful nonetheless.
Monday, April 23, 2012
Option recycling: logical assignment of equity at early-stage startups
I've been giving a lot of thought to equity distribution to early employees at startups. In this post I will describe a method for recycling employee stock options which offers early employees much higher stake in the early development of the company. Importantly, it provides an incentive for employees who are currently priced out of the startup labor market to enter it.
This post piggybacks off of a post I'd made earlier about why people start startups. Aside from the personal satisfaction one can achieve from this scenario, there are clear financial motivations to found a startup instead of joining one. To summarize: the founders get a large equity stake, and successive employees take on much, much smaller equity stakes -- often one or two percent, even when the company is still very much trying to figure out its business model.
The traditional model is bad because it prices certain employees out of the startup market. In this post I'll summarize a typical scenario and then describe an alternative method which involves recycling options and which could make startups more attractive to strong, experienced employees. In the very last section I outline a very simple way for employees to decide when to take on a new coworker and how much they should offer to the coworker.
A typical scenario
In a very common scenario, a startup founder (or founders) will retain at least 50% of the company and offer small equity grants, say 1% to 3% -- plus a modest salary -- to early employees. These grants become meaningful if the company becomes worth $20M or more: a 2% grant may then be worth $100K per year if it vests over four years. With a salary of $60K, this is like getting $160K per year. Sounds good, right? Potentially, until you start looking at salaries at some of the top tech companies, which are regularly offering packages around $200K per year to candidates with experience. Even if the company sells for $100M (not bad for four years!), total compensation comes to $560K/year. This sounds promising until you consider that $100M is the exception rather than the rule. If even 1/4 of these companies make it that high (and that's being optimistic), and the rest fail, the expected value is back down to $185K/year. And that's with high variance: the usual risk/volatility trade-off -- in which investors take on volatility to achieve higher gains -- is reversed.
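The arithmetic above can be sketched in a few lines of Python. The salary, grant size, and exit values are this post's illustrative assumptions, not real offer data:

```python
# Rough expected-value check for the "typical scenario" above.
SALARY = 60_000  # modest startup salary
GRANT = 0.02     # 2% equity grant, vesting over four years
YEARS = 4

def annualized_comp(exit_value, p_success=1.0):
    """Expected yearly compensation: salary plus the amortized share
    of an exit at exit_value that happens with probability p_success."""
    equity_per_year = GRANT * exit_value / YEARS
    return SALARY + p_success * equity_per_year

print(round(annualized_comp(20e6)))         # $20M exit  -> 160000
print(round(annualized_comp(100e6)))        # $100M exit -> 560000
print(round(annualized_comp(100e6, 0.25)))  # 1-in-4 odds -> 185000
```

The last line shows how quickly the headline number deflates once you discount by a (generous) success probability.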
This means that experienced, older engineers willing to work hard have very little financial incentive to join an existing, early-stage startup: it is more rational to either found a startup (where they might have a 17% or 26% stake) or work at a top tech firm. This is one of the factors contributing to the explosion of startups and the lack of engineering talent (easy credit and lower barriers to entry are two other factors).
This fixed equity + fixed salary mechanism is both antiquated and arbitrary. Standard equity grants (with modest salary) may also mean that the payoff gap between founders and early employees can be significant enough to incentivize the founder to sell the company cheaply; promising young talent is then left with modest (often below-market) income, while the founder has earned millions of dollars per year.
As I pointed out, this is largely an artifact of the equity-salary trade-off. What if instead we designed a payment structure in which employees' income grew at a rate proportional to the growth of the company, but inversely proportional to the number of employees? (Note that high-level roles in the company would of course have multipliers attached to their positions.) We can easily accomplish this by recycling options: early employees receive larger grant amounts than is currently typical, but they are then required to underwrite option grants to later employees at pre-specified increments.
To simplify the discussion, let's pretend that employees arrive in increments, and that we know the valuation of the company when they come in: the first four come in at valuation $0, then four more come in at $20M, then eight more at $100M (sixteen employees in total), and so on. It's easy to extend this to a continuous-time case, but these assumptions make the discussion easier.
Option recycling
An option-recycling scenario would look like this:
- Employees one through four each get call options representing 8% stakes in the company with a strike price at the most-recent valuation. This high stake is mitigated by each employee's contractually obligated short position on call options representing a 4% stake with strike price based on a $20M valuation, and further short positions on call options representing a 2% stake at the even higher valuation $100M, et cetera.
- Employees five through eight each receive call options representing 4% stakes at $20M valuations, and they're required to hold short positions on call options representing a 2% stake at $100M, 1% at $500M, etc.
- This means several things:
- Early employees will lock in higher earnings sooner, with lower risk of the investors selling the company for talent. An early employee with an 8% stake in a fledgling startup has enough of a stake to reasonably appreciate a $20M acquisition: it's $1.6M total, or $400K amortized over four years -- higher than the vast majority of strong individual contributors will receive at a large top tech company. This contrasts with a 2% stake at a $20M valuation, which is $100K/year -- lower than an individual contributor will receive at top tech firms. Is this hedging of risk a bad thing for individual contributors? Not if you consider that the founder has a significantly larger cushion.
- Early employees will receive lower payouts for later company growth. An early employee's income per dollar of valuation decreases as the company grows larger. If the company is worth $40M, one of the first employees will find that his income from options is (8% * $40M - 4% * ($40M - $20M)) = $2.4M, or $600K per year over four years. This increase of "only" $200K per year makes sense because there are more employees at this later stage of growth. It's not fair for early employees to take credit for later employees' contributions, unless it's been priced in based on a special role these early employees now play (manager, director, CTO, president, etc.). These special roles should of course be priced in by refresher grants and salary, but fixed equity stakes at the beginning make no sense.
- The first four employees (after the founders) -- who together might reasonably explain 32% of the first $20M of growth -- each get an appropriate share of that growth as well as an adjusted share of future growth.
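As a sanity check on the figures above, here is a minimal sketch of one early employee's position under recycling -- long a call on an 8% stake struck near $0, short calls on a 4% stake at a $20M valuation and a 2% stake at $100M, per the schedule above:

```python
def call_payoff(stake, strike_valuation, valuation):
    """Value of a call on `stake` of the company at the given valuation."""
    return stake * max(valuation - strike_valuation, 0.0)

def early_employee_options(valuation):
    # Long leg: 8% stake struck at the founding valuation (~$0).
    long_leg = call_payoff(0.08, 0.0, valuation)
    # Short legs: recycled to employees 5-8 and 9-16 at later valuations.
    shorts = call_payoff(0.04, 20e6, valuation) + call_payoff(0.02, 100e6, valuation)
    return long_leg - shorts

# Amortized over the four-year vest:
print(round(early_employee_options(20e6) / 4))  # 400000
print(round(early_employee_options(40e6) / 4))  # 600000
```

This reproduces the $400K/year figure at a $20M acquisition and the $600K/year figure at $40M.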
Here is what the early-employee compensation world currently looks like:
In the plot above I show the value of a company as a function of time. The employee equity pool (the middle curve) is a fixed fraction over time of the company value (the top curve). The first employee's equity as a share of the company is the bottom curve.
The total of employees' equity at the end is represented by the vertical bars stacked on the right side, whose total length corresponds exactly to the options' above-water value. Note that this sum is not the same as the value of the assets underlying the employee stock option pool, because later employees received options with a higher strike price. Say one of two employees gets one option at a strike price of $0 and the other gets one option at a strike price of $20; if the company is valued at $50 per share, the amount the two employees receive from options is $50 + $30 = $80, not $100. We could call the gap between the value of the assets underlying the options and the above-water value of the options (i.e., the height of the employee-pool curve minus the sum of the lengths of the bars) the dark wages, because it was never assigned to employees in the first place; it was essentially value contributed by early employees that was never paid back to them.
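The two-employee example above, in a few lines (the strikes and share price are the toy numbers from the text):

```python
# Toy dark-wages example: two employees, one option each,
# struck at $0 and $20, with the company at $50 per share.
strikes = [0, 20]
price = 50

underlying_value = price * len(strikes)                # $100 of stock backs the pool
above_water = sum(max(price - k, 0) for k in strikes)  # $50 + $30 = $80 to employees
dark_wages = underlying_value - above_water            # $20 never paid out

print(above_water, dark_wages)  # 80 20
```

The $20 gap is exactly the "dark wages": value in the pool that no employee's option ever captures.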
Founders and investors are loath to grant larger amounts to early employees because they typically assume that once an option is granted it can't be re-granted. Recall that we solved that problem by having earlier employees recycle their option grants to later employees. The early-employee compensation world now looks more like this:
The shaded region represents an individual's contribution during each period. We're simplifying, since his contribution will often increase as he gains more seniority, but the basic idea is sound: his compensation from the company in any period should aim to be proportional to 1/(number of employees). The company will grow some amount, and an individual's share of the company will grow at an amount which is also proportional to his stake in the company -- and, if hiring is consistent, proportional to his contributions to the company. Notice in particular that an early employee will have a much larger stake in the early performance of the company. This is compensation for the risk the employee is taking. In the original scenario, the employee was taking outsize risk for the first $20M in growth of the company. In the modified scenario, the employee's equity value is already fairly high by the time the company is ready to take on the 9th through 16th employees.
You may wonder why the blue bars are stacked precariously. If you look carefully, you'll see that the top of each blue bar at the end of an epoch is aligned with the top of the blue bar in the next epoch. I illustrate it this way to show how an employee's cumulative contribution toward a company usually continues to grow, but at a rate which decreases with the number of employees. His increase in value-added (and compensation) during the entire period is the sum of his increments in each period -- the height of the small vertical bar on the right. Importantly, note that there are also no dark wages: early employees held stock as it increased in value before passing it on to later employees.
Valuing an offer with recycled options
We haven't yet discussed how to estimate the strike prices and amounts of employees' short-call positions. Valuing these is an open question, but it is not conceptually any harder than valuing employee stock options. The only additional variables to estimate are the sizes of the company (in terms of employees) at future valuations. Plenty of training data exists for this, and humans will be able to make educated adjustments to model typical outcomes.
A further thing to note is that offering employees equity which has higher value at earlier stages in the company does not require that the employees be able to cash out early. Instead, they are simply underwriting options for later employees.
Giving employees the option of when to hire (and when to split their equity)
A final model I'll propose is that employees be given the option of splitting their options to take on another employee.
At any time, the current employees (say there are N of them) will be given the option of bringing on more talent by offering a stake of their equity. There will be a point at which this should be not only rational but attractive: as the company progresses with a fixed number of employees, it will reach a local optimum of the company's valuation, at which point their options cannot increase in value. At this time, it is completely rational for an employee to realize that his X% stake cannot grow any further without bringing on another employee. If each employee gives X%/(N+1) of his stake as call options to an incoming employee (who then has the same amount as each old employee, but with a higher strike price), then the original employee's remaining N/(N+1) * X% can continue to grow. Again, each employee gets credit for growth he has contributed to, but no more.
This is similar to the scheme above, but it gives an employee control over his own equity stake. As the company continues to grow, employees (and founders and investors) can make such decisions completely rationally.
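A minimal sketch of this hire-and-split rule, using exact fractions; the four-employees-at-8% starting point is the illustrative figure from earlier in the post:

```python
from fractions import Fraction

def split_stakes(x, n):
    """Each of n incumbents holding stake x gives x/(n+1) to a new hire.
    Returns (stake each incumbent retains, stake the newcomer receives)."""
    retained = x * Fraction(n, n + 1)
    newcomer = n * (x / (n + 1))
    return retained, newcomer

# Four employees holding 8% each take on a fifth:
retained, newcomer = split_stakes(Fraction(8, 100), 4)
print(retained, newcomer)  # both 8/125, i.e. 6.4% (newcomer at a higher strike)
```

As claimed in the text, the newcomer ends up with exactly the stake each incumbent retains; the incumbents' advantage lies entirely in their lower strike price.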
Sunday, March 25, 2012
Chaotic Computing and Bogus DARPA programs
Am I the only one a bit suspicious of lots of buzzwords in DARPA solicitations? From a current solicitation: "Utilize the concept of nonlinear dynamical chaos to create a chaos-based computer for a novel approach to information processing that has the potential to be superior to existing technologies in finding optimal solutions for problem solving and decision-making."
What?
So I looked up chaos computing on Wikipedia and found a research community suspiciously centered around a single fellow named "Ditto", CTO of ChaoLogix. It's unclear what they're really working on, despite their commercialization description.
I'd be in favor of a way to bid-down program solicitations so garbage like this doesn't get funded.
Thursday, February 16, 2012
Really? And why is this not in the editorial section? Aggressive Acts by Iran Signal Pressure on its Leadership.
Tuesday, November 08, 2011