The “Rule of 70” – Why Does it Make Sense?

Today I was doing my Economics homework when I came across a section about the “Rule of 70.” The rule states that for a value that has a growth rate of n%, the number of years for the value to double is approximately 70 divided by n.

This was curious to me. Rules like these that seem to rest on “coincidence” usually have some underlying mathematical basis. So why 70, of all numbers?

First, I assumed that the rate of growth was x% per year. The final value V of a beginning value P after t years is V=P(1+\frac{x}{100})^t. Solving for t when V=2P (double the initial):

P(1+x/100)^t=2P

\log_{1+\frac{x}{100}}(2)=t

            Using logarithmic base conversion,

t=\frac{\ln2}{\ln(1+\frac{x}{100})}

[Plot of t=\frac{\ln2}{\ln(1+\frac{x}{100})}, made with Wolfram Alpha.]

Now, let’s say we want to shortcut this expression using a function with similar qualitative properties. What function is easy to use and looks similar to this? That’s right: a reciprocal function. In this case, we can use the generic function y=\frac{q}{x}, where q is a constant we shall shortly solve for, since it is likewise asymptotic to both the x- and y-axes.

The thing is, how do we determine what q is? We can’t just solve for it directly, since that won’t work out nicely. We can’t take the limit of q as x approaches infinity, since the error there deviates a lot. (Just look at the example of 100% interest – one method gives 1 year, and the other gives 0.7 years.) But we know that the rule holds pretty well for small rates, like x=2. So why not take the limit of q as x approaches zero?

Thus, using some L’Hopital’s rule and other clever devices,

\frac{\ln2}{\ln(1+\frac{x}{100})}=\frac{q}{x}

q=\frac{x\ln2}{\ln(1+\frac{x}{100})}

\lim\limits_{x\to0}q=\lim\limits_{x\to0}\frac{x\ln2}{\ln(1+\frac{x}{100})}=\lim\limits_{x\to0}\frac{\ln2}{\frac{\frac{1}{100}}{1+\frac{x}{100}}}=\lim\limits_{x\to0}\frac{\ln2\cdot\left(1+\frac{x}{100}\right)}{\frac{1}{100}}=\lim\limits_{x\to0}\left(100\ln2\left(1+\frac{x}{100}\right)\right)

            And this, of course, is equal to 100\ln2, which is about 69.315, which is about 70! So it all makes sense now.

A comparison of the “rigorous” function and the “estimate” function:

[Plot comparing the exact doubling-time function with the estimate \frac{70}{x}.]

And finally, a percent-error graph for values 0 to 100:

[Percent-error plot of the estimate for x from 0 to 100.]
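If you want to see the numbers for yourself, here’s a quick Python sketch (the helper names are my own) comparing the exact doubling time with the 70/x shortcut:

```python
import math

def exact_doubling_time(x):
    """Exact number of periods to double at x% growth: ln 2 / ln(1 + x/100)."""
    return math.log(2) / math.log(1 + x / 100)

def rule_of_70(x):
    """The shortcut: 70 divided by the growth rate."""
    return 70 / x

# The limiting constant derived above: 100 ln 2 ≈ 69.315
print(100 * math.log(2))

for x in [1, 2, 5, 10, 100]:
    exact, est = exact_doubling_time(x), rule_of_70(x)
    print(f"{x:>3}%: exact {exact:7.3f}, 70/x {est:7.3f}, "
          f"error {100 * (est - exact) / exact:+6.2f}%")
```

As the error column shows, the shortcut is nearly exact for small growth rates and drifts as x grows.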

But yeah, totally awesome! xD

…And More Calculator Stuff! (Shortcuts FTW)

Well, at least this time it’s math-related.

Basically, after a lot of hard work at the calculator, I discovered the following solutions to huge problems I used to have.

Finding the values of all variables – Just type the following into your (TI-83 or TI-84) calculator:

 seq(expr(sub("ABCDEFGHIJKLMNOPQRSTUVWXYZθ",X,1)),X,1,27)

This gives you a list with 27 elements, which are your variables neatly placed in order.

Converting Binary to Decimal (Without a program) – Just type this into the home screen. Replace number with the string of 0’s and 1’s that make up your binary number. (KEEP THE QUOTATION MARKS!!!! example: “101010”)

 "number":sum(expr(sub(ans,X,1))*2^(length(ans-X),X,1,length(ans)))

Eureka! Now you can convert binary to decimal.
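For readers without a TI handy, here’s the same idea as a short Python sketch (the function name is my own):

```python
def bin_to_dec(bits):
    """Mirror the calculator one-liner: each digit times 2^(places from the right)."""
    return sum(int(b) * 2 ** (len(bits) - 1 - i) for i, b in enumerate(bits))

print(bin_to_dec("101010"))  # 42, same as int("101010", 2)
```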

Vector Functionality – Believe it or not, your calculator came with vector functionality! Let L1 and L2 be your vectors, both in the form {Vx,Vy,Vz,...} . Then the following commands will help you out:

sum(L1*L2) #The dot product.
√(sum(L1^2)) #The magnitude of a vector.
L1 - L2 #The difference vector between L1 and L2
L1/√(sum(L1^2)) #The unit vector in the direction of L1
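The same vector tricks translate directly to Python; a minimal sketch of the four commands above (variable names match the calculator lists):

```python
import math

L1 = [1.0, 2.0, 2.0]
L2 = [3.0, 0.0, 4.0]

dot = sum(a * b for a, b in zip(L1, L2))   # the dot product
mag = math.sqrt(sum(a * a for a in L1))    # the magnitude of L1
diff = [a - b for a, b in zip(L1, L2)]     # the difference vector
unit = [a / mag for a in L1]               # the unit vector along L1

print(dot, mag, diff, unit)
```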

Yaaaaay! ^_^

Not really math… but drawing is fun!

Drawing on TI-84 calculators is a fun and questionably productive way to blow off steam. Click on the picture to see the gif. Yay.

I had to make a less detailed drawing this time because otherwise the gif file would be colossal. But now that I think about it I probably screwed up somewhere in-between – maybe when I edited the .gif in MovieMaker? 😛

[gif of the calculator drawing]

Sorry for the terrible quality xD, can someone tell me how to do this right?

Exploring Binary Fractions: What are the Digits?

When looking into this problem, I was initially confused by the “cycles” of remainders that appear when long division is used (until I realized that the cycle length is bounded by the divisor). In retrospect, this activity seems kind of unnecessary, but since I spent time on it, I’ll post it here for readers to understand. And stuff.

So to answer this question, I first proposed a statement to prove:

\forall\ i\in\mathbb{P},\ \exists \ n\in\mathbb{Z}^{+}\ s.t. \ 10^n\equiv 0\text{ or }1 \mod i

The reason I chose this problem is because of the phenomenon that fractions brought into the form \frac{x}{10^n} turn out terminating, while fractions that can be brought into the form \frac{x}{10^n-1} end up repeating.

Let’s take a look at this problem for 7. 10^1 \equiv 3, 10^2 \equiv 2, 10^3 \equiv 6, 10^4 \equiv 4, 10^5 \equiv 5, 10^6 \equiv 1 \mod 7. Notice how after 10^6, the pattern repeats itself once again, with 10^7 \equiv 3 \mod 7.

Doing the same for 17 gives the following cycle of values: 10, 15, 14, 4, 6, 9, 5, 16, 7, 2, 3, 13, 11, 8, 12, 1. 16 values. That’s 17-1.

The same for 19: 10, 5, 12, 6, 3, 11, 15, 17, 18, 9, 14, 7, 13, 16, 8, 4, 2, 1. 18 values.

The same for 23: 10, 8, 11, 18, 19, 6, 14, 2, 20, 16, 22, 13, 15, 12, 5, 4, 17, 9, 21, 3, 7, 1. That’s 22 values.
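The cycles above are easy to generate with a short loop. A sketch (the function name is my own), assuming the modulus is coprime to 10 so the cycle really does return to 1:

```python
def remainder_cycle(m, base=10):
    """Successive remainders of base^1, base^2, ... mod m, up to the first 1."""
    cycle, r = [], base % m
    while True:
        cycle.append(r)
        if r == 1:
            return cycle
        r = (r * base) % m

print(remainder_cycle(7))   # [3, 2, 6, 4, 5, 1]
print(len(remainder_cycle(17)), len(remainder_cycle(19)), len(remainder_cycle(23)))
```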

In fact, this phenomenon – the remainders eventually cycling back to 1 – holds for any modulus that is coprime to the base. I’ll let you think about that one in your free time.

In the meanwhile, I plan to submit these sequences to OEIS; the one for 17 is already online. These sequences have been called “remainder orbits,” among other names. Apparently, there exists a function called the “Carmichael function”: \lambda (n) is the smallest positive integer m such that a^m \equiv 1 \mod n for every a coprime to n. The length of the period is the smallest m with b^m \equiv 1 \mod n for the base b itself (its “multiplicative order”), which divides \lambda(n) – exactly because of this cycling phenomenon.

<< Part 4: Repeating After the Point

Part 6 >>

Exploring Binary Fractions: Repeating After the Point

Life isn’t perfect. Neither is the binary system. Or the decimal system. Or any other practical base system.

One element of investigation is repeating binaries, which are numbers that do not terminate when expressed in binary. To understand them better, let’s take a look at the properties of repeating numbers in decimal.

Suppose we had the number 0.\bar1. This, we know, is equal to \frac{1}{9}. But the big question, as always, is WHY. Why does the repeating decimal have a “nice” value – a ratio of integers?

Let’s take a look at this example, 0.\bar1. We can express the number as a summation: S=1\cdot\left(\frac{1}{10}\right)+1\cdot\left(\frac{1}{10}\right)^2+1\cdot\left(\frac{1}{10}\right)^3+\dots. This summation – an infinite geometric series with first term a=\frac{1}{10} and common ratio r=\frac{1}{10} – can be evaluated using S = \frac{a}{1-r} = \frac{1/10}{1-1/10} = \frac{1}{9}. Eureka!

But observant readers may notice that this explanation is completely unsatisfactory in answering the question of why every ratio of integers can be expressed as either a terminating or repeating decimal. Think about it. It’s not trivial – is it just a convenience that any ratio of integers can be expressed “nicely” in our number system?

The answer here may be unsatisfactory, but here is an explanation that will hopefully be plausible. You know long division? That stuff applies in every integer base, so it works for binary too. But first, let’s talk about the terminating numbers. Suppose you are dividing by a “nice” number – say, for base 10, a product of powers of 2 and 5 – whose prime factors are all shared with your base. These always lead to terminating decimals. To prove this, suppose you had base b and the fraction \frac{c}{d}, where c \in \mathbb{Z} and d\in\left\{i_1^{p_1}\cdot i_2^{p_2}\cdot i_3^{p_3}\cdots\mid i_k \mid b,\ p_k\in\mathbb{Z}^{+}\right\}. (Basically, that’s the fancy way of saying “d is nice.”) Now, if the fraction is multiplied by a sufficient power of b, then \frac{c}{d} becomes an integer! Then, the base-b “decimal point” can be moved back the appropriate number of places – the power of b used to manipulate the number.

But what about repeating fractions? Think about long division. Say I’m dividing by 29, for example. This has a repeating unit (“period,” in number-theory terms) of length 28. As you long-divide, you keep bringing down zeros and carrying over remainders. These remainders can only range from 0 to 28. Once the remainder hits 0, the number terminates; if it never hits 0, it does not. And if it hits a previously-seen remainder, the digits enter a cycle that repeats itself forever. Since there are only 29 possible remainders, one of those two outcomes is forced within 29 steps. (Try it yourself!)
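You can watch this remainder argument in action by mechanizing long division. A sketch (the function name is my own) that divides in any base and reports where the digits start repeating:

```python
def long_divide(num, den, base=10):
    """Digits of num/den after the point, plus the index where the digits
    start repeating (None if the expansion terminates)."""
    digits, seen, r = [], {}, num % den
    while r and r not in seen:
        seen[r] = len(digits)   # remember where this remainder appeared
        r *= base
        digits.append(r // den)
        r %= den
    return digits, (None if r == 0 else seen[r])

print(long_divide(1, 8))           # ([1, 2, 5], None)    -> 0.125 terminates
print(long_divide(1, 7))           # ([1, 4, 2, 8, 5, 7], 0) -> 0.(142857)
print(long_divide(1, 3, base=2))   # ([0, 1], 0)          -> 0.(01) in binary
```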

To look at the problem of what those digits are, move to the next section! And beyond!

<< Part 3: Terminating Binaries

Part 5: What Are the Digits? >>

Exploring Binary Fractions: Terminating Binaries

Why should you care about terminating binaries? Because, IDK, computers? Computers use binary? Or something like that.

But that’s beside the point. (No pun intended.)

Now let’s say I’m a computer. In computer science class a while back, I learned that there is a “floating point” representation that uses binary to represent decimals. Some decimals can be represented accurately in binary, which is nice. But some can’t.

How does the “float” type work? Let’s take a look at the IEEE 754-2008 documentation for a floating point number (available at www.math.fsu.edu/~gallivan/courses/…/IEEE-fpstandard-2008.pdf.gz):

[Figure: the IEEE 754-2008 floating-point bit layout, segments S, E, and T]

So how does a 32-bit number work? Here are some definitions:

  • Segment “S” is known as the “sign.” It’s exactly what you think it is. One for negative, zero for positive.
  • Segment “E” is known as the “exponent.” This is an 8-bit unsigned expression for 32-bit numbers, and represents the power of two to multiply the following number by. (For the super-nerdy: a bias of 127, or 01111111, is added to the exponent so that negative exponents can be expressed as well).
  • Segment “T” is known as the “mantissa.” The mantissa is normalized – that is, in truncated scientific notation. Example: Take the binary number 10.001101. This is expressed as 1.0001101 \cdot 10^1. Then, this is truncated to 0001101 to form the mantissa.

So here’s an example: how do you express the number 12.375_{10} in binary, floating-point style?

  1. Find out how to express 12.375 in base 2. It’s relatively easy in this case: 8 + 4 + 0.25 + 0.125, or 2^3 + 2^2 + 2^{-2} + 2 ^ {-3} = 1100.011_2
  2. Convert this to base-2 scientific notation: 1100.011_2 = 1.100011 \cdot 10 ^ {11}
  3. Truncate that.
  4. Now you have all your information! Sign 0, exponent 11, and mantissa 100011.
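You can check this against what a computer actually stores. Here’s a sketch using Python’s standard struct module to expose the 32-bit pattern (the helper name is my own):

```python
import struct

def float_fields(x):
    """Pack x as an IEEE 754 single and split the 32 bits into S / E / T."""
    bits = format(struct.unpack('>I', struct.pack('>f', x))[0], '032b')
    return bits[0], bits[1:9], bits[9:]

s, e, t = float_fields(12.375)
print(s)  # '0'         (positive)
print(e)  # '10000010'  (exponent 3 plus the bias of 127 = 130)
print(t)  # '10001100000000000000000' (mantissa 100011, zero-padded to 23 bits)
```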

Play around with http://www.easysurf.cc/fracton2.htm in your free time. If you notice repeating decimals, you know where to look: Part 4!

<< Part 2: How Binaries Work

Part 4: Yum, Repeating Binaries >>

Exploring Binary Fractions: What are Binaries?

Now that you’re a master at nonnegative integers in binary, let’s extend the amount we can express. What if we wanted to find a way to write any nonnegative rational number?!

I’m too lazy to scare you with fancy notation, so I’ll explain it in words instead ^_^

You know how, for integers, you move leftward from the ones place and multiply by increasing powers of whatever base you’re in? Now go rightward from the binary point, and multiply by increasingly negative powers of the base. Nothing illustrates like an example:

Convert 101.101_2 to decimal.

Solution: 101.101_2 = 2^2 + 2^0 + 2^{-1} + 2^{-3} = 5.625
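A little Python sketch (my own helper) makes it easy to check conversions like this one:

```python
def bin_frac_to_dec(s):
    """Convert a binary string like '101.101' to its decimal value."""
    whole, _, frac = s.partition('.')
    value = int(whole, 2) if whole else 0
    for i, bit in enumerate(frac, start=1):
        value += int(bit) * 2 ** -i   # digits right of the point weigh 2^-1, 2^-2, ...
    return value

print(bin_frac_to_dec("101.101"))  # 5.625
```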

Please reply in the comments what I can do to improve this post! More examples? Better explanations? I want to do whatever is possible to be a better communicator.

Click onwards to Part 3, to learn more about binaries – specifically, terminating binaries.

<< Part 1: lol, what are binary numbers? xD

Part 3: Terminating Binaries >>

Exploring Binary Fractions: Intro to Binary

If you’re familiar with the binary system already, go ahead and skip this post. But if you’re not, this post is a great way to understand the interesting problem at hand. You won’t regret reading it.

How the Binary System Works (Don’t worry, you can skip this if you know this already)

In Decimal, we have a convenient system for expressing numbers. The single digits 0-9 express exactly what they say. When we have a two-digit number, the first digit (1-9) multiplied by 10, added to the second digit (0-9), expresses the value we want to communicate. In general, we can say that (in base 10) digit a_i of the number x = a_n a_{n-1} \ldots a_2 a_1 carries the value a_i \cdot 10^{i-1}; the total value of the number is x = \sum\limits_{i = 1}^{n} {a_i \cdot 10^{i-1}}. Generalizing, we can make the following relationship between notation and value in any normal base:

given \ a_i \in \{0, 1, \ldots, b-1\},\ x_b = (a_n a_{n-1} \ldots a_2 a_1)_b = \sum\limits_{i = 1}^{n} {a_i \cdot b^{i-1}}

For any integer base b \geq 2, the set of all numbers that can be written this way has a one-to-one correspondence with \mathbb{N}^0 (don’t even let me get into non-integer bases xD).

An Example

Convert 123_4 to decimal.

Solution: 123_4 = 1 \cdot 4^2 + 2 \cdot 4^1 + 3 \cdot 4^0 = (16 + 8 + 3)_{10} = 27_{10}

Convert 10_{10} to binary.

Solution: We know that 10 is equal to 8, or 2^3, plus 2. So it’s 1010_2.
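Python can double-check both examples: int parses a numeral in any base up to 36, and a short loop (to_base is my own helper) handles the other direction:

```python
# Parsing a numeral in a given base
print(int("123", 4))   # 27
print(int("1010", 2))  # 10

def to_base(n, b):
    """Write nonnegative integer n in base b by repeated division."""
    digits = []
    while n:
        n, r = divmod(n, b)
        digits.append(str(r))
    return "".join(reversed(digits)) or "0"

print(to_base(10, 2))  # '1010'
```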

This was probably really alienating if you haven’t seen binary numbers before. Here are some links for understanding binary better:

http://www.codeconquest.com/code-tutorials/binary-tutorial/

http://www.mathsisfun.com/binary-number-system.html

Onward to Part 2, to find out how binaries work!

<< What are Binary Rational Numbers?

Part 2: How Binaries Work >>

Exploring Binary Fractions: What are Those?!?!

In the decimal system, we have numbers that can be expressed as integers – without decimal points. But we also have some numbers that require a decimal point when written out; we call these numbers “decimals.” As in, “hey you, can you write out \frac{1}{2} as a decimal?” “Why of course, it’s 0.5!”

Now, what if I said that non-integer rational numbers exist in binary as well? Let’s set up some terminology that’s more accurate and makes things easier for us:

Binaries – Like “decimals,” they’re numbers that cannot be written out without the “dot.”

Binary Points – Like “decimal points” – the dot.

In this thread, we’re going to talk about all sorts of fun stuff. So follow to Part 1 on the Binary System!

<< What are Binary Rational Numbers?

Part 1: Intro to Binary 101 >>

Hair Length: Conclusions (and Math!)

This is the final part of the series on hair length.

Remember the super-large-scale graph that gave a smooth curve? Let’s take a look again:

[pyplot graph of average hair length over time, from the large-scale simulation]

Isn’t that just beautiful? What amazes me is how consistently the graph forms a recognizable exponential curve. This suggests that the average growth of hair follows an exponential model. This gives rise to several conclusions:

  1. Hair, on average, grows faster when cut. That’s why people say that “shaving makes your hair grow faster.”
  2. This effect can be achieved through a simple model of growth that makes two basic assumptions: individual hairs grow at the same rate as their neighbors, and hairs fall out at a predictable rate.
  3. Because of this, different hairs can be at different lengths while the average length converges to an “equilibrium.”
  4. The two main (and constant) factors in determining the length at which hair “knows to stop growing” are the falling-out frequency and the growth rate.
  5. Lists of five are important for creating convincing arguments.

Now, here’s the big question you’ve all been waiting for:

How can we tell what the “maximum” length is, just from our two constant factors: falling-out frequency and growth rate?

This is where math enters the equation! First, let’s make some definitions: we will let p be the probability that any given hair falls out during one unit of time, r be the growth rate in length per unit of time, n be the number of hairs, and \bar x be the average length of all hairs.

So first we’ll start out with a differential equation; we know that the rate of change in hair length is equal to the rate of growth minus the rate of “loss” in the form of dropped hairs:

\frac{d \bar x} {dt}=r-\text{loss}

But what is the expression for “loss”? Using probability, p \cdot n is the expected number of hairs to be dropped completely per unit of time. Multiplying this by the average length, the expected total length lost is p \cdot n \cdot \bar x. The average length dropped per hair, then, is this divided by n: \frac{p \cdot n \cdot \bar x}{n} = p \cdot \bar x. Thus, our differential equation becomes:

\frac{d \bar x} {dt} = r - p \cdot \bar x

Now, let’s get serious and solve this!

\begin{array}{rcl}  \frac{d \bar x}{r-p\cdot \bar x} &=& dt\\\\ -\frac{1}{p}\ln{|r - p \cdot \bar x|} & = & t + C \\\\ \ln{|r - p \cdot \bar x|} & = & -p \cdot (t + C) \\\\ r - p \cdot \bar x &=& e^{-p \cdot (t + C)} \\\\ \bar x &=& \frac {r- e ^ {-p \cdot (t + C)}}{p}  \end{array}

Now, let’s assume that C = 0; in that case, we have the following:

\bar x = \frac{r - e^{-p \cdot t}}{p}

You know what this means?! As t grows, the e^{-p \cdot t} term vanishes, so the maximum average hair length approaches the rate of hair growth divided by the rate of hair loss, \frac{r}{p}! Wow!

Notice this with the super-large-scale graph: with hairs growing at 1 unit/iteration, and hairs falling with a chance of 0.01/iteration, we use 1/0.01 = 100 to determine the final maximum length!
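A tiny Monte Carlo sketch (the function name and parameter defaults are my own, chosen to mirror the model above) shows the average length settling near r/p:

```python
import random

def average_hair_length(n=2000, r=1.0, p=0.01, steps=1200, seed=1):
    """Each step, every hair grows by r; with probability p it falls out (resets to 0)."""
    rng = random.Random(seed)
    hairs = [0.0] * n
    for _ in range(steps):
        hairs = [0.0 if rng.random() < p else h + r for h in hairs]
    return sum(hairs) / n

print(average_hair_length())  # settles near r/p = 100
```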

Using Real Data:

According to Yahoo Answers, head hair grows at 150 mm/year, and each hair lasts 2 to 6 years. Thus, it’s safe to say that, per year, the probability of a hair falling out is 1/6 to 1/2. Using our formula, the upper limit for head hair is 300 to 900 mm, or about 1 to 3 feet. Seems reasonable!

That’s all, folks! Hope you had just as much fun reading this as I had writing this!

<< Part 3: Large-Scale Modeling
