To the experienced reader:

“One of the major objectives in modern algebraic geometry is to build an object that is analogous to manifolds, i.e. topological spaces which are locally euclidean, by replacing ‘locally euclidean’ with ‘locally affine’…”

(for the pleasure of the reader we give a pretty picture of a manifold, shamelessly stolen from Google, to start out this long rambling blog post)

To me, “modern” algebraic geometry has been shaped by the Zariski topology on the lattice of prime ideals. Personally I HATE how clinically it is treated by the popular texts and notes available online and elsewhere; this will be my attempt to convince the reader that it is as natural as any other topological space you have been building intuition about for all of your career.

To an undergraduate in mathematics, or even a first-year graduate student, these words may still have a mystic power. To slay these evil wizards, let’s begin with an example.

In most modern algebraic geometry texts, the authors like to begin with the example of polynomials over the real or complex numbers, either for historical reasons or, I think, in an attempt to win the reader over to the “geometry” of Algebraic Geometry. Instead, let’s consider the ring of the integers, the first setting in which a student is introduced to the concept of “prime”; and instead of pushing the “geometry” of algebraic geometry, I would like to sell its topology, or calculus. If you were introduced to this ring in an undergraduate modern algebra class, it would have been presented to you as the set of integers along with two binary operations

which satisfy certain axioms, the same “rules” we all learn as little children in between recess and singing songs. We then learn the concept of what it means for a number to be prime. We all know and love the prime numbers. In ring theory the concept of prime is lifted from a property of elements to a property of ideals.

The prime ideals of the integers are those generated by prime numbers, or by zero. We use the notation (2), for example, to indicate all integers that have 2 as a factor [you may call these the even numbers]. We can build a lattice by using set inclusion as a partial ordering on the prime ideals. To remind the reader, a partial ordering is exactly what you would think:

This is a cool way to introduce partial orderings, since the first time we were introduced to something having “order” it was the integers, and we were shown in kindergarten that these are best visualized on a line. But inclusion of prime ideals is not a total order, it’s a partial order, so what would be the next natural way to look at this “new line”…

A lattice

We think of the “smaller” objects on the bottom and the “larger” objects on the top. I like this description, but to make it look more like the line, for all of our inner kindergartners, we should draw this

“left-to-right”

So this is our **“Ideal Space”**: the lines are just there so we can picture the “ordering”, and as is common in “spaces” we consider the prime ideals as our “points”.

After growing up a little bit we are taught the topology of the line, that is, open and closed intervals

The student who is still awake will recall that we can just define one of these concepts (i.e., closed or open) and we get the other for free. The big deal about the Zariski topology is that we begin by defining the closed sets. But to keep this “line” analogy rolling: we know that to define a closed interval we pick two points, in the picture above “a” and “b”…

In the Zariski topology, to build what is analogous to a closed interval we start with a single “point” and then collect everything that is larger than it; the open sets are just the complements (everything but) of these “closed intervals”.
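For readers who like symbols, here is the same construction written down in standard notation (a notational aside, not needed for the rest of the post); p and q range over prime ideals of the integers:

```latex
V(\mathfrak{p}) \;=\; \{\, \mathfrak{q} \in \operatorname{Spec}(\mathbb{Z}) \;:\; \mathfrak{p} \subseteq \mathfrak{q} \,\},
\qquad
U_{\mathfrak{p}} \;=\; \operatorname{Spec}(\mathbb{Z}) \setminus V(\mathfrak{p}).
```

For example, V((2)) = {(2)}, since (2) is maximal and contains no prime ideal but itself, while V((0)) is the whole space, because (0) sits inside every prime ideal.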

For a general closed set we take finite unions and arbitrary intersections of these, and again the open sets are the complements…

In the next blog post we will discuss a “basis” for this topology…

I leave the reader with an exercise:

“What do the open and closed sets look like in the integers, and what about in the ring of polynomials in one variable over the real numbers? To help with the latter we give you the picture for this”

An entertaining history of the first scientists who struggled to answer the question of how, exactly, humans create new life.

——–

Despite tremendous advances in science and exploration, by the dawn of the modern age people still did not know where babies came from. Many agreed on the basics (that men and women had sex and, as a result, sometimes babies), but beyond this the consensus ended. Unlocking this secret took centuries of ludicrous missteps and bungled discoveries, impelled by the extraordinary drive and hubris of some of humanity’s greatest scientists.

The Seeds of Life is a remarkable history of the scientists who struggled mightily to crack the code of human conception and to prove how and where babies come from. Taking a page from investigative thrillers, acclaimed science writer Edward Dolnick looks at each of these blundering geniuses and brilliant amateurs as if they were detectives hot on the trail of a bedeviling and urgent mystery. The action jumps from England and Holland in the 1600s, during a golden age of anatomy and dissection, onward to France and Italy in the 1700s, where a great battle was waged over whether sperm or egg is the dominant element, finally landing in Germany in the 1800s, when microscopes at last, after centuries of speculation, allowed scientists to actually witness fertilization.

The protagonists on this centuries-long quest are remarkable characters in their own right: the list includes Leonardo da Vinci, creator of fantastic anatomical drawings but fanatically disgusted by the sexual act; William Harvey, celebrity scientist who dissected countless rutting deer without ever finding the first signs of conception; Antony van Leeuwenhoek, a microscope pioneer and self-taught scientist, who introduced the world to bacteria and other organisms too small for the naked eye, and yet mistook the sperm cells he discovered for mere parasites; and Lazzaro Spallanzani, an enquiring Catholic priest who designed ingenious miniature boxer shorts to prove that frogs required semen to fertilize their eggs.

We live in a world where many of the mysteries of human existence have been carefully explained and catalogued; Dolnick’s The Seeds of Life offers readers a chance to recapture the wonder and awe of discovery. A witty and rousing history of science, The Seeds of Life presents our greatest scientists struggling, against their perceptions, their religious beliefs, and their deep-seated prejudices, to uncover the deepest mysteries of life.

——–

Available June 06, 2017 from Hachette Audio as a digital download, Hardcover and eBook from Basic Books.

Download:

http://ift.tt/2sVK0p2

http://ift.tt/2tTdnFK

http://ift.tt/2sVAr9B

http://ift.tt/2tTdRvg

http://ift.tt/2sVzE8t

http://ift.tt/2tTmeXM

http://ift.tt/2sV7MRR

Check out our other great titles and more at:

http://ift.tt/1Evch0I

Follow us at:

- “The 500-page proof that only one mathematician can understand“, by Michael Byrne (Motherboard). An article on the state of the abc conjecture. Naturally slips down the slope of the “proof — for whom?” question.
- “Why a top mathematician has joined Emmanuel Macron’s revolution“, by Elisabeth Pain (Science). Fields medalist Cédric Villani leaves his post as director of the Institut Henri Poincaré to represent En Marche! in France’s National Assembly.
- “What is a generalised mean-curvature flow?“, by Hui Yu (AMS). In case you’re ever stopped on the street and asked such a question.
- “What causes a fever?“, by Peter Nalin (Scientific American). Answer: Often, the hypothalamus responding to pyrogens. That is, your immune system fighting off intruders.
- “French sign Reich truce, Rome Pact next“, by Guido Enderis (NY Times, 22 June 1940).

**Learning how to write fractions in their simplest form is essential for math class and real-world scenarios. Give kids the practice they need to master this skill, which is a building block for the addition, subtraction, multiplication, and division of fractions.**

*Are you ready to play?*

- Playing cards (without face cards)
- Pencils
- Paper
- Players (an even number)

- Draw a line through the center of a piece of paper, cutting it in half horizontally. This will act as a fraction bar, separating the numerator and the denominator during the game. Create one of these for every two players.
- Have each pair of players face each other.
- Shuffle and deal out the cards evenly between the players. Each person should place their cards in front of them, face down.
- Begin the game by having every player turn a card from their stacks face up simultaneously, and place it on the paper in front of them. The cards should be placed above the fraction bar, to represent the numerator.
- Players should then repeat the process with a second card, placing it below the fraction bar to represent the denominator.
- For every two players, there should be two cards above and two cards below the fraction bar, for a total of four cards.
- The first person to successfully simplify the fraction shown in front of them wins all of the cards. If both players simplify the fraction at the same time, they split the cards.
- If the fraction can’t be simplified, have each player take the cards that the other player laid down and put them at the bottom of their deck.
- When one player has collected all of the cards, the game is over and they’ve won!
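For players who want to check their answers (or for a quick classroom demo), the winning move is just dividing the numerator and denominator by their greatest common divisor. A minimal sketch in Python, using only the standard library (the function name is ours, purely illustrative):

```python
from math import gcd

def simplify(numerator: int, denominator: int) -> tuple[int, int]:
    """Reduce a fraction to simplest form by dividing out the gcd."""
    g = gcd(numerator, denominator)
    return numerator // g, denominator // g

# A hand showing 6 over 8:
print(simplify(6, 8))   # -> (3, 4)
# A fraction that cannot be simplified stays as-is:
print(simplify(5, 7))   # -> (5, 7)
```
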

*To shorten game length, you can set a time limit on the game.*

Education.com aims to empower parents, teachers, and homeschoolers to help their children build essential skills and excel. With over 12 million members, Education.com provides educators of all kinds with high-quality learning resources, including worksheets, lesson plans, digital games, an online guided learning platform, and more.

**Check it out!**

Till next time…

Solving exponential quantities has always been cumbersome; it is a field where intuitive calculations generally fail. I was taught in school how to use log tables to solve exponential terms and complex multiplications and divisions. For multiplication, the logarithms of the quantities were added; for division, they were subtracted; and for raising a quantity to a power, the power was multiplied by the logarithm of the quantity. An explanation of the terms exponent and mantissa and the use of tables were considered sufficient for solving numericals at that time.

In an earlier article titled “Good Bye To Log Table And Calculator! Determining Logarithm Is Easy Now,” available at https://narinderkw.wordpress.com/2017/06/16/goodbye-to-log-table-and-calculator-determining-logarithm-is-easy-now/ I presented a detailed discussion on how to determine the logarithm of a quantity without any help.

But today, with God’s blessing, I will discuss how, knowing the logarithm, one can find the quantity; in other words, how the antilogarithm can be determined without the use of calculators and log tables.

*Overview Of Determining Logarithm*

In brief, I will review what I discussed in my earlier article, particularly for those who have not read it. However, I strongly recommend that readers go through that article as well.

What is the logarithm of a quantity, and what is the antilogarithm? A hundred can be written as 10 multiplied by 10: there are two tens, the first ten multiplied by the second. The number of times ten is multiplied is called its power. In other words, 100 has power 2, and 100 can also be stated as 10 raised to the power two. 10000 can be stated as 10 raised to the power 4. This power of ten is the logarithm (to the base 10) of the given number. Therefore log 1000 (to base 10) is equal to 3. But to find the logarithm of 25 to the base 10, one will have to use a formula.

*Changing Base Of Logarithm*

I further submit that the base need not necessarily be ten; it may be any other quantity also. It can be 5, 2, or the natural number “e”. If the base is five then we have to consider powers of 5, i.e., log 25 (to the base 5) is 2. Generally, in our arithmetic calculations, the base is universally taken as 10. If the base is not mentioned, it is assumed to be ten. For example, log 100 means log 100 to the base 10 and is equal to 2.

But in scientific calculations, the base is taken as the natural number “e”, and the logarithm of 100 to the base e is written as ln 100, where ln means logarithm to the base e, or natural logarithm. Note that ln 100 is not equal to 2, since the base here is not 10. But we can always change the base from one quantity to another by a simple base-changing formula.

*(log a to the base b) x (log b to the base c) = log a to the base c ……(1)*

Also, *(log a to the base 10) x (log 10 to the base e) = log a to the base e ……(2)*

The above formulas are called the base-changing formula.

It can be memorised by considering log a to the base b as a/b.

Applying this to equation (1),

we get a/b x b/c = a/c

and applying to equation (2),

we get a/10 x 10/e = a/e
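The mnemonic can be checked numerically. A small sketch (Python’s `math.log(a, b)` computes the log of a to the base b):

```python
import math

# Equation (1): log_b(a) x log_c(b) = log_c(a) -- the "a/b x b/c = a/c" mnemonic.
a, b, c = 100.0, 10.0, math.e
left = math.log(a, b) * math.log(b, c)   # log_10(100) x ln(10)
right = math.log(a, c)                   # ln(100)
print(abs(left - right) < 1e-9)          # True

# The 2.303 factor used later in this article is exactly ln 10:
print(round(math.log(10), 3))            # 2.303
```
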

*What Is Natural Number “e”?*

Natural number e is the sum of an infinite convergent series

*e = 1 + 1/1! + 1/2! + 1/3! + ……………. + 1/n! + ………… up to infinity.*

On adding a few terms, its value up to three decimal places is 2.718.

To determine the log of a quantity “n” to the base e, i.e., ln n, the power of e which makes it equal to n has to be found. Let us call this ln n y; it can then be written as

e^y = n, where the sign ^ denotes “raised to the power”.

e^y can be expanded in power series as

e^y = 1+ y/1! + y^2/2! + y^3/3! + y^4/4! +…………..up to infinity.

Since e^y = n, we can write as

*n = 1 + y/1! + y^2/2! + y^3/3! + y^4/4! + ………….. up to infinity ………(3)*

If y is made so small that the terms y^4/4! and higher can be neglected, then

n = 1 + y/1! + y^2/2! + y^3/3!.
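This truncation is easy to experiment with. A minimal sketch (the function name is mine, purely illustrative):

```python
import math

def series_antilog_e(y: float, terms: int = 4) -> float:
    """Approximate e**y by the first `terms` terms of its power series,
    i.e. equation (3) truncated after y^3/3! when terms=4."""
    return sum(y**k / math.factorial(k) for k in range(terms))

y = 0.25
print(series_antilog_e(y))   # close to math.exp(y); the error grows quickly with y
```

The first neglected term is y^4/4!, so for y around 0.25 the approximation is already good to three decimal places.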

Alternatively, ln n can also be found from the expansion of ln (1+x):

ln (1 + x) = x - x^2/2 + x^3/3 - x^4/4 + ………. up to infinity

If l is the logarithm, then ln (1+x) = l. Ignoring the cube and higher powers of x, then

l = x -x^2/2 or

x^2 - 2x + 2l = 0.

On solving, x = 1 + (1 - 2l)^(1/2) or 1 - (1 - 2l)^(1/2) …………(5)

The antilogarithm of the quantity l is then equal to 1 + x.
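Equation (5) can also be checked numerically. For small l the root 1 - (1 - 2l)^(1/2) is the one near zero, and 1 + x then approximates the antilog e^l. A quick sketch (function name mine):

```python
import math

def antilog_from_quadratic(l: float) -> float:
    """Invert l = x - x^2/2 (equation (5)) and return 1 + x,
    the approximate antilog of l to the base e."""
    x = 1.0 - math.sqrt(1.0 - 2.0 * l)   # the root near zero, valid for l < 1/2
    return 1.0 + x

l = 0.1
print(antilog_from_quadratic(l))   # about 1.1056, versus math.exp(0.1) = 1.1052...
```
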

*Anti Logarithm*

Antilogarithm means finding the quantity whose logarithm is already known; that is, the power to the base 10 is known but the quantity is to be determined. It is the reverse of the logarithm: in the logarithm the power is determined, but in the antilogarithm the quantity is calculated from the power.

I take an example where the logarithm of a quantity to the base 10 is 2.73. Then the quantity will be 10^2.73, or (10^2).(10^.73). Here 2 is the exponent and .73 is the mantissa. We know the value of 10^2 equals 100, but we do not know what 10^.73 equals. That is the objective of this article: how 10^.73 (or simply the antilog of .73) can be determined.

Let us say the antilogarithm of -3.7 is to be found. This logarithm can be written as -4 + .3, or “4 bar” plus .3. In this example, the exponent is 4 bar (minus 4) and the mantissa is .3.

Another example is the logarithm -2.6; it can be written as 3 bar plus .4, i.e., the antilog is (10^-3).(10^.4). Kindly bear in mind that the exponent can be negative (in bar) but the mantissa can never be negative. If it is negative, it has to be transformed to positive, as was done for -3.7.

It is worth noting that log tables give values of logarithms and antilogarithms to the base 10, but in scientific calculations the base is taken as e, and that requires a base change. For changing the base from 10 to e, the logarithm is multiplied by 2.303, where 2.303 is log 10 to the base e.

*Error And Its Reduction*

If the mantissa is very small and close to zero, ignoring the powers above the cube in the series introduces negligible error. But if the mantissa is large, it can cause appreciable error. To reduce it, the mantissa can be split into two or more parts; the antilog of each part is calculated and the results are multiplied together. For example, if the mantissa is .456, it can be split up as .152 + .152 + .152; the antilog of .152 is determined and then multiplied by itself three times. It can also be split up as .15 + .15 + .156, but again the antilogs of all the parts must be multiplied together.
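The splitting trick is easy to verify numerically. A sketch, assuming the truncated series of equation (3) (function name mine):

```python
import math

def series_exp(y: float, terms: int = 4) -> float:
    """First few terms of the power series for e**y (equation (3), truncated)."""
    return sum(y**k / math.factorial(k) for k in range(terms))

m = 0.456 * 2.303                 # a large mantissa, converted to base e as in the text
whole = series_exp(m)             # truncated series applied to the whole mantissa
split = series_exp(m / 3) ** 3    # split into three equal parts, then multiply
exact = math.exp(m)
print(abs(split - exact) < abs(whole - exact))   # True: splitting reduces the error
```
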

*Example 1*

Let us say the antilog of 2.22 is to be found. That means the value of 10^2.22, or (10^2).(10^.22), is to be found. The part before the decimal, here 2, is called the exponent, and it enters the calculation only in the final result.

But the mantissa, which is .22 to the base 10, requires conversion to base e by multiplying it by 2.303. Therefore the mantissa (to the base e) is .22 x 2.303 = .50666, and for accuracy it is split up as .25333 + .25333. On putting this value in equation (3), we get

n = 1 + .25333 + (.25333)^2/2! + (.25333)^3/3! + ……….

= 1 + .25333 + .06418/2 + .01626/6 + ….

= 1 + .25333 + .03209 + .00271 …

= 1.28813.

Antilog of .50666 = 1.28813 x 1.28813 = 1.6593

Actual value from the log table: 1.660.

Error is – .04 percent.

Therefore the antilog of 2.22 is (10^2) x (1.6593) = 165.93. The factor 10^2 corresponds to the exponent 2.

*Example 2*

Let us calculate the antilog of -3.93. This logarithm can be written as -4 + .07, i.e., the exponent is 4 bar and the mantissa is .07. (A large mantissa such as .7 would be too big to handle accurately with a few terms of the series; a small one like .07 is ideal.) The mantissa, on converting it to base e by multiplying .07 by 2.303, becomes .16121. On putting this value in equation (3), we get

n = 1 + .16121 + (.16121)^2/2! + (.16121)^3/3! ……

= 1.16121 + .02599/2 + .00419/6 …

= 1.16121 + .01299 + .00070 = 1.1749.

Actual result from the antilog table = 1.175.

Error is -.01 percent.

The antilog of -3.93 is (10^-4) x (1.1749).

*Example 3*

Let the logarithm of a quantity be 5.095.

The exponent is 5 and the mantissa is .095.

Because the mantissa is already close to zero, the truncated series will introduce little error. (For a large mantissa such as .95, split it into additive parts as in Example 1.)

On converting it to the base e, the mantissa becomes 2.303 x .095 = .2188.

On putting it in equation (3), we get

n = 1 + .2188 + .0479/2 + .0105/6

= 1 + .2188 + .0240 + .0017 = 1.2445.

Actual from the antilog table = 1.2450.

Error is -.04 percent.

The antilog of 5.095 is (10^5) x (1.2445).

In this way, the antilog of any quantity can be found. I have not used equation (5) for determining the antilog, as that method is cumbersome.

*Conclusions*

The examples given above show that the antilogarithm of any quantity can be determined by suitably transforming it so that the mantissa has a value as close to zero as possible. Had care not been taken, the results could be highly erratic. Also, the antilogarithmic value determined by these methods will always be approximate, as the higher-power terms of the series are ignored.

Mathematics is learnt by actually working out solutions in a notebook; it can never be understood by reading it like a novel. The solution of a mathematical problem may be consulted as far as the method adopted is concerned; after learning the method, the problem should be solved independently. Now you can pick up your pen and copybook and start determining antilogarithms of quantities that strike your mind randomly. Also check the accuracy of your work against an antilogarithm table. Once you have solved some questions, it will build your confidence. You may then attempt preparing your own antilog table without using a calculator or any other device.

Note: 1) Image courtesy Author Peter John Acklam at https://en.m.wikipedia.org/wiki/File:Exp.svg 

2) Title image of a nautilus displaying a logarithmic spiral, courtesy https://en.m.wikipedia.org/wiki/Logarithm#/media/File%3ANautilusCutawayLogarithmicSpiral.jpg

3) Antilog is abbreviation of antilogarithm and log is abbreviation of logarithm.

End

*The writer is an Electronics and Electrical Communication Engineering graduate and was earlier a Scientist, then an Instrument Maintenance Engineer, then a Civil Servant in the Indian Administrative Service (IAS). After retirement, he writes on subjects including Astronomy, Mathematics, Yoga, and Humanity.*

Q8 If infinity is not really a number, then is ‘infinity minus (infinity minus one)’ still one?

Q9 If I am what I eat, how come French loaves have no effect?

Solution:

- Draw a line segment BC = 7 cm
- Make ∠B = 75°, i.e., ∠CBX = 75°
- Cut a line segment BD = AB + AC = 13 cm from the ray BX
- Join DC
- Make ∠DCY = ∠BDC
- Let CY intersect BD at A
- Then ABC is the required triangle

**Construct a triangle ABC in which BC = 8cm, ∠B = 45° and AB – AC = 3.5 cm.**

Solution:

- Draw a line segment BC = 8 cm
- Make an angle ∠B = 45°, i.e., ∠CBX = 45°
- Cut a line segment BD = 3.5 cm from the ray BX
- Join DC
- Draw the perpendicular bisector MN of DC, intersecting BX at A
- Join AC
- Then ABC is the required triangle.
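The construction can be sanity-checked with coordinates: place B at the origin and C at (8, 0), then search along the 45° ray from B for the point A with AB - AC = 3.5. A rough numerical sketch (bisection, not part of the compass-and-straightedge construction):

```python
import math

# B at the origin, C at (8, 0); A lies on the ray from B at 45 degrees.
# Find t = AB by bisection so that AB - AC = 3.5.
B, C = (0.0, 0.0), (8.0, 0.0)
angle = math.radians(45)

def diff(t: float) -> float:
    ax, ay = t * math.cos(angle), t * math.sin(angle)
    ac = math.hypot(ax - C[0], ay - C[1])
    return t - ac - 3.5          # zero exactly when AB - AC = 3.5

lo, hi = 3.5, 100.0              # diff(lo) < 0 < diff(hi), and diff is increasing
for _ in range(100):
    mid = (lo + hi) / 2
    if diff(mid) < 0:
        lo = mid
    else:
        hi = mid
print(round(lo, 3))              # -> 11.997, so such a point A exists
```
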

**Construct a triangle PQR in which QR = 6cm, ∠Q = 60° and PR – PQ = 2cm.**

Solution:

- Draw a line segment QR = 6 cm
- Make an angle ∠Q = 60°, i.e., ∠RQX = 60°
- Cut a line segment QS = 2 cm = PR – PQ from the ray QX extended on the opposite side of Q
- Join SR. Draw the perpendicular bisector MN of SR, which intersects QX at P
- Join PR. Then PQR is the required triangle.

**Construct a triangle XYZ in which ∠Y = 30°, ∠Z = 90° and XY + YZ + ZX = 11 cm.**

Solution:

- Draw a line segment AB = XY + YZ + ZX = 11 cm
- Make ∠BAP = 30° at A and ∠ABR = 90° at B
- Bisect ∠BAP and ∠ABR, and let these bisectors meet at X
- Draw perpendicular bisectors DE and FG of XA and XB respectively
- Let DE intersect AB at Y and FG intersect AB at Z
- Join XY and XZ. Then XYZ is the required triangle.

**Construct a right triangle whose base is 12cm and sum of its hypotenuse and other side is 18 cm.**

Solution:

- Let the base of the right-angled triangle be BC = 12 cm
- At B, make an angle ∠XBC = 90°
- Cut a line segment BD = 18 cm = AB + AC from the ray BX
- Draw the perpendicular bisector MN of DC, which cuts the line segment BX at A
- Join AC
- Then triangle ABC is the required right-angled triangle.


Submit NB1 and graph NB tomorrow without fail.

Bring NB2 on 23.06.17.

Write note along with example on divisibility of 11. Complete Q 4 Of Ex. 3.3 in NB1 with proper statements.

**1. Assuming small differences are meaningful**

Examples of this include small fluctuations in the stock market, or differences in polls where one party is ahead by one point or two. These represent chance rather than anything meaningful.

To avoid drawing false conclusions that arise from this statistical noise, we must consider the *margin of error* related to the numbers. If the difference is smaller than the margin of error, there is likely no meaningful difference, and it is probably due to random fluctuations.
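For a poll proportion, a common back-of-the-envelope margin of error at 95% confidence is 1.96·sqrt(p(1-p)/n). A minimal sketch with made-up numbers:

```python
import math

def margin_of_error(p: float, n: int) -> float:
    """Approximate 95% margin of error for a sample proportion p with sample size n."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

# Party A at 51%, Party B at 49%, in a sample of 1000 people:
moe = margin_of_error(0.51, 1000)
print(round(100 * moe, 1))   # -> 3.1 points: the 2-point "lead" is within the noise
```
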

**2. Equating statistical significance to real-world significance**

Statistical data may not support real-world generalisations; for example, stereotypically women are more nurturing while men are physically stronger. However, given a pile of data, if you were to pick two men at random there is likely to be quite a lot of difference in their physical strength; and if you pick one man and one woman, they may end up being very similar in terms of nurturing, or the man may be more nurturing than the woman.

This error can be avoided by analysing the *effect size* of the difference between groups: a measure of how far the average of one group differs from the average of another. If the effect size is small, the two groups are very similar. Even if the effect size is large, each group will still have a lot of internal variation, so not all members of one group will differ from all members of the other (hence the error described above).
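One common effect-size measure is Cohen’s d: the difference of the group means divided by the pooled standard deviation. A minimal sketch with made-up data:

```python
import statistics

def cohens_d(group_a: list[float], group_b: list[float]) -> float:
    """Effect size: difference of means over the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    va, vb = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

a = [5.1, 4.9, 5.3, 5.0, 5.2]   # made-up measurements for group A
b = [4.8, 5.0, 4.7, 4.9, 5.1]   # made-up measurements for group B
print(round(cohens_d(a, b), 2)) # -> 1.26, yet the two groups still overlap heavily
```
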

**3. Neglecting to look at the extremes**

This is relevant when looking at *normal distributions*.

In these cases, a small change in performance for the group has little effect on the average person, but it changes the character of the extremes much more drastically. To avoid this, we have to reflect on whether we’re dealing with extreme cases or not. If we are, these small differences can radically affect the conclusions.

**4. Trusting coincidence**

If we look hard enough, we can find patterns and correlations between the strangest things, which may be merely due to coincidence. So, when analysing data we have to ask ourselves how reliable the observed association is. Is it a one-off? Can future associations be predicted? If it has only been seen once, then it is probably only due to chance.

**5. Getting causation backwards**

When we find a correlation between two things, for example unemployment and mental health, it may be tempting to see a causal path in one direction: mental health problems lead to unemployment. However, sometimes the causal path goes in the other direction: unemployment leads to mental health problems.

To get the direction of the causal path correct, think about reverse causality when you see an association. Could it go in the other direction? Could it even go in both ways (called a *feedback loop*)?

**6. Forgetting outside causes**

Failure to consider a third factor that may create an association between two things can lead to an incorrect conclusion. For example, there may be an association between eating at restaurants and high cardiovascular strength. However, this may be due to the fact that those who can afford to eat at restaurants regularly are in a high socioeconomic bracket, which in turn means they can also afford better health care.

Therefore, it is crucial to think about possible third factors when you observe a correlation.

**7. Deceptive Graphs**

A lot of deception can arise from the way the axes are labelled (specifically the vertical axis) on graphs. The labels should show a meaningful range for the data given; for example, choosing a narrower range makes a small difference look more impactful (and vice versa).

In fact, check out this blog filled with bad graphs.

M x


Complete Q.2 and Q.3 of Exercise-3.3 in NB1.

Regards

MT

Solution:

Construction:

- Draw a ray AB with initial point A.
- Taking A as centre, draw an arc XY with any radius, intersecting the ray AB at P.
- Draw another arc with the same radius by taking P as centre, which intersects the arc XY at Q. Draw the ray AQ.
- Taking Q as centre with the same radius, draw another arc which intersects the arc XY at R. Draw the ray AR.
- Now, taking Q and R as centres with any radius, draw two arcs intersecting one another at S.
- Join AS. Then ∠SAB = 90˚

Justification:

By construction, we have AP = AQ = PQ, therefore, triangle APQ is an equilateral triangle. Thus, ∠APQ = ∠AQP = ∠QAP = 60˚

Also, we have AQ = AR = RQ; therefore, triangle ARQ is also an equilateral triangle. Thus, ∠ARQ = ∠AQR = ∠QAR = 60˚

Since, AS bisects ∠QAR then we have ∠QAS = ∠SAR = ^{1}/_{2}∠QAR = ^{1}/_{2} x 60˚ = 30˚

Thus, ∠PAS = ∠PAQ + ∠QAS = 60˚ + 30˚ = 90˚

**Construct an angle of 45˚ at the initial point of a given ray and justify the construction.**

Solution:

Construction:

- Draw a ray AB with initial point A.
- Taking A as centre, draw an arc XY with any radius, intersecting the ray AB at P.
- Draw another arc with the same radius by taking P as centre, which intersects the arc XY at Q. Draw the ray AQ.
- Taking Q as centre with the same radius, draw another arc which intersects the arc XY at R. Draw the ray AR.
- Now, taking Q and R as centres with any radius, draw two arcs intersecting one another at S.
- Join AS. Then ∠SAB = 90˚
- Again, taking S and P as centres with any radius, draw two arcs intersecting each other at L.
- Join AL. Therefore, ∠PAL = 45˚

**Justification:**

Join GH and CH, where G and C are the points where an arc centred at A cuts the ray AB and the 90˚ ray respectively, and H is the point where the two equal arcs drawn from G and C intersect.

In ∆AHG and ∆AHC, we have

AG = AC [radii of the same arc]

HG = HC [arcs of equal radii]

AH = AH [common]

⸫ ∆AHG ≅ ∆AHC [SSS congruence]

⇒ ∠HAG = ∠HAC [CPCT] ……………(i)

But ∠HAG + ∠HAC = 90˚ [by construction] ……………….(ii)

⇒ ∠HAG = ∠HAC = 45˚ [from (i) and (ii)]

**Construct the angles of the following measurements:**

**(i) 30° (ii) 22 ^{1}/_{2}° (iii) 15°**

Solution:

(i) 30° = ^{60˚}/_{2}

Construction:

- Draw a ray AB with initial point A
- Draw an arc XY with any radius by taking A as centre which cuts the ray AB at C.
- Draw an arc by taking C as centre with same radius which intersects arc XY at D. Since AC = AD = DC then, ∆ADC is an equilateral triangle.
- ∠ADC = ∠ACD = ∠DAC = 60°
- Taking C and D as centres with any radius draw two arcs intersecting one another at E. Join AE.
- Thus, ∠EAC = ^{1}/_{2}∠DAC = ^{1}/_{2} x 60˚ = 30˚

(ii) 22^{1}/_{2}˚ = ^{45˚}/_{2}

Construction:

- Draw a ray AB with initial point A.
- Taking A as centre, draw an arc XY with any radius, intersecting the ray AB at P.
- Draw another arc with the same radius by taking P as centre, which intersects the arc XY at Q. Draw the ray AQ.
- Taking Q as centre with the same radius, draw another arc which intersects the arc XY at R. Draw the ray AR.
- Now, taking Q and R as centres with any radius, draw two arcs intersecting one another at S.
- Join AS. Then ∠SAB = 90˚
- Taking S and P as centres, draw two arcs intersecting each other at T. Join AT. Then ∠TAP = ^{1}/_{2}∠SAP = ^{1}/_{2} x 90˚ = 45˚
- Bisect ∠TAP once more in the same way, obtaining the ray AU. Then ∠UAP = ^{1}/_{2} x 45˚ = 22^{1}/_{2}˚

(iii) 15°

Construction:

- Draw a ray AB with initial point A
- Draw an arc XY with any radius by taking A as centre which cuts the ray AB at C.
- Draw an arc by taking C as centre with same radius which intersects arc XY at D. Since AC = AD = DC then, ∆ADC is an equilateral triangle.
- ∠ADC = ∠ACD = ∠DAC = 60°
- Taking C and D as centres with any radius draw two arcs intersecting one another at E. Join AE.
- Thus, ∠EAC = ^{1}/_{2}∠DAC = ^{1}/_{2} x 60˚ = 30˚
- Taking E and C as centres, draw two arcs with any radius intersecting each other at F. Join AF
- Thus, ∠FAC = ^{1}/_{2}∠EAC = ^{1}/_{2} x 30˚ = 15˚

**Construct the following angles and verify by measuring them by a protractor:**

**(i) 75° (ii) 105° (iii) 135°**

Solution:

(i) 75°

Construction:

- Draw a ray AB with initial point A.
- Taking A as centre, draw an arc XY with any radius, intersecting the ray AB at P.
- With P as centre and the same radius, cut the arc XY at Q, and with Q as centre cut it again at R. Since AQ = AP = PQ, ∆APQ is an equilateral triangle; therefore, ∠PAQ = 60˚
- Now, taking Q and R as centres with any radius, draw two arcs intersecting one another at S.
- Join AS. Then ∠SAB = 90˚
- ∠SAB = ∠PAQ + ∠QAS ⇒ 90˚ = 60˚ + ∠QAS ⇒ ∠QAS = 30˚
- Taking S and Q as centres, draw two arcs intersecting each other at M.
- Join AM. Then ∠MAQ = ^{1}/_{2}∠QAS = ^{1}/_{2} x 30˚ = 15˚
- Therefore, ∠PAM = ∠PAQ + ∠MAQ = 60˚ + 15˚ = 75˚

(ii) 105°

Construction:

- Draw a ray AB with initial point A.
- Taking A as centre, draw an arc XY with any radius, intersecting the ray AB at P.
- With P as centre and the same radius, cut the arc XY at Q, and with Q as centre cut it again at R. Since AQ = AP = PQ, ∆APQ is an equilateral triangle; therefore, ∠PAQ = 60˚
- Now, taking Q and R as centres with any radius, draw two arcs intersecting one another at S.
- Join AS. Then ∠SAB = 90˚
- Taking S and R as centres with any radius, draw two arcs intersecting each other at T.
- Join AT. Then ∠SAT = ^{1}/_{2}∠SAR = ^{1}/_{2} x 30˚ = 15˚
- Thus, ∠PAT = ∠PAS + ∠SAT = 90˚ + 15˚ = 105˚

(iii) 135°

Construction:

- Draw a line AB and take a point O on it.
- Taking O as centre, draw an arc XY with any radius, intersecting the ray OB at C and the ray OA at G.
- Draw another arc with the same radius by taking C as centre, which intersects the arc XY at D.
- Taking D as centre with the same radius, draw another arc which intersects the arc XY at E.
- Now, taking D and E as centres with any radius, draw two arcs intersecting one another at F.
- Join OF. Then ∠FOC = 90˚, and hence ∠FOG = 180˚ – 90˚ = 90˚ as well.
- Now, taking G and F as centres with any radius, draw two arcs intersecting one another at H.
- Join OH. Then ∠HOF = ^{1}/_{2}∠FOG = ^{1}/_{2} x 90˚ = 45˚
- ∠HOC = ∠FOC + ∠HOF = 90˚ + 45˚ = 135˚

**Construct an equilateral triangle, given its side and justify the construction.**

Solution:

Steps of Construction:

- Draw a line segment AB of the given length
- Taking A and B as centres with radius same as that of the length of AB draw two arcs intersecting each other at C
- Join AC and BC.
- Then triangle ABC is an equilateral triangle.

Justification:

- By construction, AB = BC = AC.
- Then triangle ABC is an equilateral triangle.
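As a numerical check of the justification, here is a small coordinate sketch (the side length 5 is an arbitrary choice): the two arcs of radius AB centred at A and B meet at the apex C = (AB/2, AB·√3/2), and all three sides come out equal.

```python
import math

# With A = (0, 0) and B = (s, 0), circles of radius s centred at A and B
# intersect at the apex C = (s/2, s*sqrt(3)/2) of an equilateral triangle.

def equilateral_apex(s):
    return (s / 2, s * math.sqrt(3) / 2)

s = 5.0
a = (0.0, 0.0)
b = (s, 0.0)
c = equilateral_apex(s)

ab = math.dist(a, b)
ac = math.dist(a, c)
bc = math.dist(b, c)
print(ab, round(ac, 9), round(bc, 9))  # each side is (approximately) 5.0
```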

Abstract:

“We consider time-dependent solutions of the Einstein-Maxwell equations using anti–de Sitter (AdS) boundary conditions, and provide the first counterexample to the weak cosmic censorship conjecture in four spacetime dimensions. Our counterexample is entirely formulated in the Poincaré patch of AdS. We claim that our results have important consequences for quantum gravity, most notably to the weak gravity conjecture”

Time is a variable. Each plant and each animal has its own time line. My space is located within the space allocated to humans.

I am a transient passing through worlds parallel, overlapping, and superimposed but to me there is only one lifetime I can live. I tried to explain all that to Albert but he was having trouble understanding all of the concepts. Finally I said, “Albert, time is relative.”

We took a train trip and I explained the difference between riding in the train and watching the train go by. It took a while but gradually he began to understand. I think he might be able to explain several theories if he pays attention but he is still fuzzy about the speed of light and how light can be bent and go even faster. I’ll explain it again next week. I’d hate for him to give up when he’s this close. On the other hand, he could learn to be a poet and become famous. No one ever gets famous learning obscure mathematical theories. And maybe Albert could be a politician. No one ever knows what they’re talking about. (or cares)

June 21, 2017

That we cannot doubt of our existence while we doubt, and that this is the first knowledge we acquire when we philosophize in order. (Rene Descartes)

Accordingly, the knowledge, I think, therefore I am, is the first and most certain that occurs to one who philosophizes orderly. (Rene Descartes)

Before examining these inferences of the famous French mathematician, scientist, and thinker Rene Descartes, who lived in the 17th century, let us discuss a few preliminary points.

We are thinking every second; it is almost impossible to stop thinking. Among these many thoughts, is there any idea that comes out of nowhere? Let us unpack this a little. Don't all the ideas we think actually originate through some association, stimulation, observation, experiment, and so on? For example, Newton's idea of gravity is based on a deduction prompted by the fall of an apple. Einstein's relativity was the result of observations he had made. In the simplest sense, an idea takes shape as a conclusion drawn from thoughts already processed in our minds. That is to say, an idea cannot be created by human beings out of nothing. If you think carefully, you can see that all ideas arrive at a different point by deduction from some starting point.

This creates a question for us: if all ideas are connected from one point to another, can an idea be produced from absolute nothing? Descartes too, with or without awareness, must have thought about this question, hence the "first and most certain that occurs to one who philosophizes orderly". So he chose a starting point. He tried to obtain 'correct' information through inferences from a point which he assumed to be true. There is an important remark to make at this stage: if the information taken as a starting point is wrong, we cannot speak of truth for our inferences (calling a claim wrong and calling it not certainly true are two different judgments; one carries certainty while the other does not!). We will not dwell on this for now.

If we turn back to our question, no living or inanimate entity can create ideas. The reason is simple. All ideas are actually based on a starting point, as we have mentioned above, like the starting point of Descartes.

Now let's talk about certainty. If all ideas are based on a starting point, and all we can do is deduce interpretations from an initial point, then we can never be 100 percent sure of the correctness of our base points. All of the inferences we make rest on the assumption that the starting point is correct, and to prove that we would need yet another starting point, which is itself uncertain for the same reason.

In order to understand the truth of the statement, it is necessary to understand first what thinking is. Do we really think, for instance? Can we be aware of it? When we play a computer game and press some buttons, the characters in the game act as we want. Are they aware of that? Couldn't it be, as in a simulation game, that some 'creatures' press a few buttons and for that reason we presume we think? Can a creature who does not know what he will think five seconds from now be sure that thinking happens under his own will? How much of this can we claim for ourselves if we are programmed to 'think'? Is it possible to find the right answer to what thinking is when we cannot be 100 percent sure that we are not programmed to think? Or is it not a paradox to try to answer what thinking is by thinking? Long story short, the questions of what thinking is and whether we really think, especially after the emergence of artificial intelligence, have come to be seen as vague questions. Therefore, we cannot be sure what thinking really is, and we cannot propose that doubting implies thinking while we do not really know what thinking is.

The other issue is: do we really doubt? By the same logic, we can say that we may be programmed to be suspicious. And again, as mentioned above, how much of the act of doubting do we truly possess? We cannot be sure of the act of doubting while the answers to what and how we doubt are pending. So we are talking about an initial point that we cannot prove is correct (not being sure it is correct does not imply that we claim the statement is wrong!). In this case, we cannot say that the deduction is right.

As we have noted above, we cannot be certain of the correctness of this inference because we cannot be certain of the act of thinking (uncertainty about the accuracy of the base point).

There is also another issue, a paradox. What does it mean to exist? Do we exist? Does our universe really exist? Does a reflection in a mirror exist? Is such an illusion, which occupies no place of its own in our universe, a being?

If we do not consider a reflection to be a being, then, should our universe itself be a reflection, we may claim that our universe does not exist. More precisely, we may claim that it might be an illusion.

Mathematics, like the other sciences, is based on inferences. This is why mathematics needs starting points too. For example, the Euclidean geometry most commonly used is built on five postulates [1]. These postulates cannot be proved; their validity has to be accepted. All other results are derived once these five postulates are accepted.

On the other hand, the number 0 was unknown in mathematics for centuries. In fact, the existence of numbers other than the rational numbers was unknown. If we could go back and ask the people of those times, they would claim without hesitation that their number system was complete. Their number system was far from complete, though. After the irrational numbers appeared, it was understood that the rational numbers are so sparse on the number line that the probability of picking a rational number at random is zero! That is to say, the rational numbers are almost absent from the number line, yet those almost absent numbers had been accepted as the whole number system for centuries. Time has shown that we may be mistaken even in mathematics; so we can be mistaken in any branch of science.

In conclusion, inferences we now believe to be correct can be refuted centuries later. Thus, we cannot be sure of anything, and we cannot claim certainty for anything, including this very claim!

———————————————————————————————-

[1] https://en.wikipedia.org/wiki/Euclidean_geometry#Axioms

There’s a persistent myth that you may have seen before in the guise of news. It claims to cite a study from the World Health Organization (WHO) showing that naturally blond hair will be a thing of the past within a few hundred years.

When we dig into the history of this story—or, more accurately, when we read the results of Snopes digging into it—we find that the same story with slight variations has been making the rounds for more than a hundred and fifty years! People have been claiming the imminent extinction of blonds since at least the American Civil War.

Why does this seem believable enough to keep repeating it over and over? Part of the reason is probably that many of us remember only a few things from our assorted biology classes, and that likely includes the concept of dominant and recessive traits.

A simple but distorted way of thinking about it is that hair color is controlled by two copies of a gene, one from each of our parents. Many genes come in multiple versions, or alleles, which can result in different hair colors, and these alleles can be dominant or recessive. A dominant allele is expressed if it shows up in one or both copies, but a recessive allele has to be present in both copies to be expressed. So if a hypothetical mom has blond hair but the hypothetical dad has brown hair, that's another genetic dead end for blonds, right?

Fortunately for genetic diversity, this isn’t really the way it works. We’ll keep it simple and consider just brown and blond hair (an obvious simplification, since redheads aren’t dying out either). We’ll also ignore the facts that many traits aren’t determined by a single gene and that dominance isn’t always complete enough to exclude the recessive trait.

Take my example, with a blonde mother and a dark-haired father. Those are their phenotypes, or the way their genes are expressed in their physical development. Blondness is recessive, so we know the mother is homozygous, with two alleles of the same type. Let’s also say the father got dark-haired alleles from both his parents, so he’s also homozygous. Here are our genotypes: mother **aa** (blond), father **AA** (dark).

Based on their genotypes, we can construct a Punnett square with the father’s genotype on the horizontal axis and the mother’s on the vertical axis. The resulting grid shows the possible combinations of the single alleles they’ll give to their children.

This one’s kind of boring, since each of them can only give one type of allele. All their kids get one dark and one blond allele, so they have the dominant phenotype of dark hair. Note, however, that their blond alleles haven’t gone anywhere! They’re hanging out, just waiting for their chance to hop into the gene pool.

At this point, we can talk about what’s called Hardy-Weinberg equilibrium, with a few tweaks to our situation. One is that genetic statistics can’t be done at the level of one couple, or even a couple couples. To have numbers large enough to work for population statistics, our hypothetical parents would have to shatter the world record for the number of offspring born to one couple (69 kids, courtesy of the Vassilyevs of 18th-century Russia, if you’re curious). We’ll just take their heterozygous kids and make them into a population of, say, several thousand.

The other change is that half dominant and half recessive alleles is a *boring* setup, so we’re going to make it a 60-40 split.

…yes, it’s pretty much an entirely different scenario. I just wanted us to feel like we had built a connection with our imaginary breeding population, OK?

Anyway, let’s look at the population. Let’s say that out of all the alleles floating around, a certain percent code for dark hair (**A**) and a certain percent code for fair hair (**a**). We can label those as **p** and **q**, expressed as fractions of 1, representing all the alleles in the population. So we have

**p** = fraction of **A** alleles, **q** = fraction of **a** alleles,

so that if 40% of our hair color alleles are dominant (brown) and 60% are recessive (blond), we can say

**p** = 0.4 and **q** = 0.6.

These fractions also give us the chance that any randomly selected allele is either **A** (**p**) or **a** (**q**). For our simple scenario, **p** and **q** are our only allele fractions, so we can say

**p** + **q** = 1.

When we set up a Punnett square, we can look not only at the combinations of **A** and **a**, but also at the statistical chance of each combination. We get these by multiplying the fractions for the two alleles that combine in each block, so that an **AA** combination has a **(0.4) x (0.4) = 0.16** or **16%** chance of occurring.

For Hardy-Weinberg conditions to apply, we have to assume that hair color alleles aren’t significantly sex-linked, so that **p** and **q** are the same and **A** and **a** are distributed equally for men and women (another reason not to go with our original homozygous parents as a basis). That means that our Punnett square has the same values on the vertical and horizontal axes, so it comes out like this:

|  | **A** (0.4) | **a** (0.6) |
| --- | --- | --- |
| **A** (0.4) | **AA** (0.16) | **Aa** (0.24) |
| **a** (0.6) | **Aa** (0.24) | **aa** (0.36) |

This is a setup that looks very similar to binomial expansion, which I covered a while back. I’ve illustrated our genetic scheme with the same kind of square I drew then to show the terms of the binomial expansion, which shows us that in this case we get those same binomial terms for the fraction of offspring that has each possible genotype:

(**p** + **q**)² = **p**² + 2**pq** + **q**² = 1,

or, more specifically,

(0.4)² + 2(0.4)(0.6) + (0.6)² = 0.16 + 0.48 + 0.36 = 1.

To see how this actually works out for brown and blond hair, remember that the middle term is heterozygous (one of each allele) so the brown will dominate (again, a simplification compared to how it works in some cases). Here’s how the phenotypes add up: dark hair (**AA** or **Aa**): 0.16 + 0.48 = 0.64; blond hair (**aa**): 0.36.

With this first generation of offspring, we come to the reason this is called the Hardy-Weinberg equilibrium—this distribution of allele combinations will be the same for every generation from this point forward, regardless of how the first generation started out. The 60-40 split we used could have come from an initial population that was 60% homozygous blond and 40% homozygous dark-haired, but from now on (statistically speaking) it’s going to be 64% dark-haired and 36% light-haired.

Note that dark hair is more prevalent, even though more than half of the alleles are for blond hair! I chose the numbers for this scenario specifically to illustrate that dominant traits don’t necessarily have more alleles in the population. They’re not always the majority, either. If **q** for blond hair had been higher (say, **0.75**), **q**² would have been high enough for blonds to be a majority: **q**² = (0.75)² = 0.5625, or about 56% blond.
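The genotype arithmetic above is easy to reproduce. A minimal sketch, using the same 40-60 split and the hypothetical **q** = 0.75 case:

```python
# Hardy-Weinberg genotype fractions: for allele fractions p (dominant, dark)
# and q = 1 - p (recessive, blond), the genotypes come out as
# p^2 (AA), 2pq (Aa), and q^2 (aa).

def hardy_weinberg(p):
    """Return (AA, Aa, aa) genotype fractions for dominant-allele fraction p."""
    q = 1 - p
    return p * p, 2 * p * q, q * q

# The 40-60 split from the post: 64% dark phenotype, 36% blond.
AA, Aa, aa = hardy_weinberg(0.4)
dark = AA + Aa   # dominant phenotype (AA or Aa)
blond = aa       # recessive phenotype (aa only)
print(round(dark, 2), round(blond, 2))  # 0.64 0.36

# With q = 0.75 (so p = 0.25), the recessive phenotype is the majority:
print(hardy_weinberg(0.25)[2])  # 0.5625
```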

It seems that blonds are probably safe, considering the size of our human population. Of course, our conclusion depends on a whole list of assumptions, including the assumption that there’s nothing in particular keeping blonds from passing on their genes. If significantly fewer blonds were having kids, then yes, the number of **a** alleles in the mix would drop significantly. As far as I know, though, there’s no reason to suspect that’s a danger.

Now that your worst fears are allayed, read up on the details of the Hardy-Weinberg principle and decide for yourself how well it actually applies to the human population! Real genetic inheritance for many human traits is much more complicated than **A** or **a**, and it’s amazing to see how our diversity comes about. Whether your eyes are brown, blue, hazel, or an unsettling inferno of all-seeing fire, enjoy your reading!

- Focus,
- Coherence,
- Rigor

are only a beginning. As I reflect on these, I realize my degree in mathematics gives me a huge advantage over most teachers, because it gave me a perspective I am not sure is common: mathematics is alive, a language that connects all. Quite simply, mathematics is the queen of the sciences, as one of the giants who came before us said (Gauss, I believe, though I could be mistaken).

The change I am seeing in my teaching that affects student learning (it is not really a change, but something I focus and reflect on more frequently) is CONNECTIONS! I find myself, and later in the year I hear and see my kids, connecting prior learning to new learning. I have always done this, but now I give my kids an assessment blueprint, and I find myself writing notes for myself about what I need to connect and bring out. I have also strengthened assessment and challenged my students to perform.

The standards set by our state's department of education are fixed; the goal post is set. But I scaffold instruction and assessment to give my students entry points and hope. In many classes, I allow my students to learn from their mistakes and raise their grades. Initially I had them correct the problems only; now they also tell me, in a few sentences, why they made the mistake and how they will fix it going forward.

The other area where I am experiencing huge change is planning. I am still struggling here, but I have tried to keep a calendar on Google; we use Google Classroom, though I am pondering a change to MS Classroom as it supports math learning far more easily. I was finding I needed more time….

I also find myself needing to delve into educational research to continue growing and learning. More later. Stay cool out there; here it is 118 degrees. Steve


**Using technology to support mathematics education and research**

Christian received his PhD in 2011 at Utrecht University and is a lecturer at the University of Southampton. In this talk Christian will present a wide spectrum of research initiatives that all involve the use of technology to support mathematics education itself and research into mathematics education. It will cover (i) design principles for algebra software, with an emphasis on automated feedback, (ii) the evolution from fragmented technology to coherent digital books, (iii) the use of technology to measure and develop mental rotation skills, and (iv) the use of computer science techniques to study the development of mathematics education policy.

The talk referenced several articles Dr. Bokhove has authored over the years, for example:

- Bokhove, C., & Drijvers, P. (2012). Effects of a digital intervention on the development of algebraic expertise. *Computers & Education*, 58(1), 197-208. doi:10.1016/j.compedu.2011.08.010
- Bokhove, C. (in press). Using technology for digital maths textbooks: More than the sum of the parts. *International Journal for Technology in Mathematics Education*.
- Bokhove, C., & Redhead, E. (2017). Training mental rotation skills to improve spatial ability. *Online proceedings of the BSRLM*, 36(3).
- Bokhove, C. (2016). Exploring classroom interaction with dynamic social network analysis. *International Journal of Research & Method in Education*. doi:10.1080/1743727X.2016.1192116
- Bokhove, C., & Drijvers, P. (2010). Digital tools for algebra education: criteria and evaluation. *International Journal of Computers for Mathematical Learning*, 15(1), 45-62. doi:10.1007/s10758-010-9162-x


Recent advances in telescope and sensor technology have finally allowed us to start answering one of the great unknowns of the 21st century – how many habitable worlds are there in our Milky Way galaxy?

Until recently it was simply assumed that a fairly high fraction of stars ‘probably’ contained planets, and that some further fraction of those would be in the so-called habitable zone. These assumptions were based solely on knowledge extrapolated from our own solar system. But as has been painfully demonstrated by the hunt for life in our solar system, assumptions like this need careful confirmation, and extrapolating from a sample size of one is seldom convincing. Often, what we want to believe is, regrettably, not compatible with reality. *The video link from the late Carl Sagan at the end of the piece will give you a flavour of what people ‘guessed’ in the 1980s.*

The good news is we now have concrete evidence that an abundance of other planets are out there orbiting other suns – in fact roughly 70% of all stars are accompanied by other worlds. How do we know this? There are two simple detection methods which I’d like to explain in more detail, which both ‘indirectly’ detect the presence of other planets. These are:

- Transit Photometry
- Radial velocity

The first method involves closely observing a star over a period of time with a photometer, looking for any subtle changes in its brightness. Any dip in brightness which is periodic and cannot be explained by general stellar dynamics is then assumed to be a large body passing between us and the star.

Incredibly, even amateurs with 12 inch backyard telescopes have been able to detect these illumination changes for very large planets, producing crude light curve plots that have later been verified by professional astronomers.

This method of detection works out to distances of several thousand light years, and allows an approximate calculation of the planet’s size, or radius. Additionally, for certain nearby stars (if clear spectra can be obtained), dips in light at specific wavelengths can be detected, telling us about the possible composition of the planet’s atmosphere. For example, if the planet had a nitrogen-rich atmosphere, we might detect a dip in the spectrum’s intensity at the wavelengths that nitrogen absorbs; this process is called ‘absorption spectroscopy’.

The main disadvantage of the transit method is that it relies on a nearly perfect edge-on view of the planet-sun ecliptic from our earth bound position. Otherwise we simply would not detect any occultation. Thankfully, we can calculate roughly how often this orientation occurs and account for it statistically. There’s no shortage of candidate stars out there with systems well aligned for detection.
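The size of the dip the photometer is looking for can be sketched numerically: the fractional drop in brightness equals the ratio of the planet's disc area to the star's, (R_planet / R_star)². The radii below are standard approximate values, not figures from this post.

```python
# Transit depth: fraction of starlight blocked by a planet crossing the
# stellar disc, assuming an edge-on orbit and a Sun-like star.

R_SUN = 6.957e8      # metres
R_JUPITER = 6.991e7  # metres
R_EARTH = 6.371e6    # metres

def transit_depth(r_planet, r_star=R_SUN):
    """Fractional dip in brightness: ratio of disc areas."""
    return (r_planet / r_star) ** 2

print(f"{transit_depth(R_JUPITER):.4%}")  # a Jupiter blocks roughly 1%
print(f"{transit_depth(R_EARTH):.4%}")    # an Earth blocks roughly 0.008%
```

This is why amateur telescopes can catch Jupiter-sized transits (a ~1% dip) while Earth-sized ones need space-based photometry.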

The second method is similar, but instead focuses on the relative motion of the star. Despite the huge difference in mass between a star and its satellites, as a planet orbits it imparts enough of a gravitational tug to make the star trace a small orbit about their common centre of mass, tiny but detectable. The larger the planet, the bigger this wobble will be.

To detect this regular movement we can look at the light emanating from the star over time and try to detect whether its spectrum is being shifted by tiny amounts. By the Doppler effect, if the wavelengths of the star’s spectrum appear shifted towards the red it must be moving away from us, and the opposite if the spectrum is shifted towards the blue. In this way the star’s speed and the radius of its small orbit can be calculated, and from that the mass of the orbiting planet can be determined.

As a quick example, the Sun moves at about 13 m/s due to the influence of Jupiter, but only about 9 cm/s due to Earth. Incredibly, velocity variations down to 1 m/s or even less can be detected with modern spectrographs such as HARPS. The major limitation of this method is distance: at the moment it’s generally only useful for star systems up to around 200 light years away.
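Those wobble figures can be sanity-checked with a momentum-balance estimate: for a circular orbit, the star's reflex speed is roughly (m_planet / m_star) times the planet's orbital speed. The masses and orbital speeds below are standard approximate values.

```python
# Stellar reflex velocity: by conservation of momentum about the common
# centre of mass, m_star * v_star = m_planet * v_planet (circular orbit).

M_SUN = 1.989e30      # kg
M_JUPITER = 1.898e27  # kg
M_EARTH = 5.972e24    # kg
V_JUPITER = 13.07e3   # Jupiter's orbital speed, m/s
V_EARTH = 29.78e3     # Earth's orbital speed, m/s

def reflex_velocity(m_planet, v_planet, m_star=M_SUN):
    """Star's wobble speed induced by a planet on a circular orbit, in m/s."""
    return (m_planet / m_star) * v_planet

print(f"{reflex_velocity(M_JUPITER, V_JUPITER):.1f} m/s")     # roughly 13 m/s
print(f"{reflex_velocity(M_EARTH, V_EARTH) * 100:.1f} cm/s")  # roughly 9 cm/s
```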

Bringing both methods together, however, lets us form a picture of an exoplanet’s size and mass, and therefore its overall density. From that, inferences can even be made about the internal structure of the planet. All this information without ever observing the planet directly!

So what do these methods tell us, so far, about the likely number of habitable or earth like planets in our Milky Way galaxy? The answer is absolutely staggering.

Based on Kepler mission data, as many as **40 billion** earth like planets in the habitable zone could be orbiting around red dwarf and sun-like stars. Taking away the red dwarfs leaves an upper calculation of **11 billion** around Sun like stars. Just think about those numbers for a moment.

That’s as many as one earth-like planet in the habitable zone for every 10 stars in our local galaxy! A stupendously high number of candidate worlds from which life may have originated.

But here again, we must be cautious. 11 billion is 11×10^9. But what if the probability of life forming on rocky planets within the habitable zone were actually as low as 10^-10, or even 10^-99? Then there might be only one, or even no, candidate planets containing life. A depressing possibility, but one we should never allow our natural bias to discount. The lesson here is that any large number can quickly be diminished in stature by an equally small probability.

At the moment we simply don’t know what this probability of emergent life is. Some biologists are more optimistic and consider it relatively high for simple single cell life, but other figures, for more complex multi cellular organisms, are much more pessimistic. But when we do know this figure, calculating the number of planets on which life has arisen will be comparatively simple, and Frank Drake’s famous equation for estimating the number of ‘technical civilisations’ will be one step closer to a final solution.
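The arithmetic behind this caution is trivial to sketch: multiply the candidate count by an assumed probability of life. The probabilities below are purely illustrative assumptions, not measured values.

```python
# Expected number of life-bearing worlds = candidate planets x probability
# of life arising on each. The probabilities here are illustrative only.

candidates = 11e9  # habitable-zone, Earth-like planets around Sun-like stars

for p_life in (1e-2, 1e-10, 1e-99):
    expected = candidates * p_life
    print(f"p = {p_life:.0e}: expected living worlds ~ {expected:.3g}")
```

A huge count shrinks to a handful, or to effectively zero, as soon as the per-planet probability is small enough.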

*How did we view the question of ‘other planets’ in the context of life outside Earth in the 1980s. Watch the late Carl Sagan to find out.*

The field of mathematics today represents an ongoing global effort, spanning both countries and centuries. While some developments emerged in multiple cultures, independent of each other, others involved an extensive exchange of ideas among individuals around the world. Through this in-depth narrative, students will learn how major mathematical concepts were first derived, as well as how they evolved with the advent of later thinkers shedding new light on various applications. Everything from…

Link http://sharpbook.net/books/the-britannica-guide-to-the-history-of-mathematics
