I used *Stella 4d* to make this. You may try this program for free at http://www.software3d.com/Stella.php.

Any Grothendieck topology on *χ* that we discuss in the following has at least one sheaf for it. Therefore, we can assume any sieve

Let *J* be a Grothendieck topology on

**J(U_{k}) = {↓U_{k}}**

for **k = 0, 1, 2, 3**

We now discuss **J(U_{∞})**.

For *k = 0, 1, 2, 3, ∞*, define sieves

The following are all the possible sieves on *U_{∞}*.

**I_{12} = I_{1} ∪ I_{2}, I_{13} = I_{1} ∪ I_{3}, I_{23} = I_{2} ∪ I_{3}, I_{123} = I_{1} ∪ I_{2} ∪ I_{3}**

Now, we define two Grothendieck topologies *J_{0}* and *J_{1}*.

*J_{0}* is defined as

*J_{1}* is defined as

We can easily show that any Grothendieck topology on *χ* that has at least one sheaf on

The diagram shows the unique extension from *I_{123}* to

So, we have a necessary and sufficient condition for a monetary value measure to be a *J_{1}*-sheaf.

* 1.0 Ψ* becomes a sheaf for

**g_{1}(a – c’) + c’ = g_{2}(b – a’) + a’ = g_{3}(c – b’) + b’**

**⇒ (c’ = f_{1}(b – c) + c) ∧ (a’ = f_{2}(c – a) + a) ∧ (b’ = f_{3}(a – b) + b)**

Entropic value measure: Let *P* be a probability measure on *Ω = {1, 2, 3}*, and define

**Ψ^{V}_{U}(X) := 1/λ log E^{P}[e^{λX} | V]**

Then from

**Ψ^{1}_{∞}(a, b, c) = 1/λ log E^{P}[(e^{λa}, e^{λb}, e^{λc}) | U_{1}]**

**= (a, 1/λ log (p_{2}e^{λb} + p_{3}e^{λc})/(p_{2} + p_{3}), 1/λ log (p_{2}e^{λb} + p_{3}e^{λc})/(p_{2} + p_{3}))**

the corresponding six functions from part 3 are:

**f_{1}(x) = 1/λ log (p_{2}e^{λx} + p_{3})/(p_{2} + p_{3})**

**f_{2}(x) = 1/λ log (p_{3}e^{λx} + p_{1})/(p_{3} + p_{1})**

**f_{3}(x) = 1/λ log (p_{1}e^{λx} + p_{2})/(p_{1} + p_{2})**

**g_{1}(x) = 1/λ log (p_{1}e^{λx} + p_{2} + p_{3})**

**g_{2}(x) = 1/λ log (p_{1} + p_{2}e^{λx} + p_{3})**

**g_{3}(x) = 1/λ log (p_{1} + p_{2} + p_{3}e^{λx})**
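As a quick sanity check, the six functions can be transcribed and evaluated numerically (the values of λ and the p_{i} below are hypothetical; any probabilities summing to 1 work). Each function should fix 0, precisely because p_{1} + p_{2} + p_{3} = 1:

```python
import math

lam = 1.0
p1, p2, p3 = 0.2, 0.3, 0.5  # hypothetical probabilities; they must sum to 1

# The six functions from the text, transcribed directly.
f1 = lambda x: math.log((p2 * math.exp(lam * x) + p3) / (p2 + p3)) / lam
f2 = lambda x: math.log((p3 * math.exp(lam * x) + p1) / (p3 + p1)) / lam
f3 = lambda x: math.log((p1 * math.exp(lam * x) + p2) / (p1 + p2)) / lam
g1 = lambda x: math.log(p1 * math.exp(lam * x) + p2 + p3) / lam
g2 = lambda x: math.log(p1 + p2 * math.exp(lam * x) + p3) / lam
g3 = lambda x: math.log(p1 + p2 + p3 * math.exp(lam * x)) / lam

# Each function fixes 0 because p1 + p2 + p3 = 1.
for h in (f1, f2, f3, g1, g2, g3):
    assert abs(h(0.0)) < 1e-12
```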

So, the question is whether the entropic value measure is a *J_{1}*-sheaf. The necessary and sufficient condition becomes

**p_{1}e^{λa} + (1 – p_{1})e^{λc’} = p_{2}e^{λb} + (1 – p_{2})e^{λa’} = p_{3}e^{λc} + (1 – p_{3})e^{λb’} =: Z**

**⇒ Z = p_{1}e^{λa} + p_{2}e^{λb} + p_{3}e^{λc}**

However, this does not hold true in general. As a corollary, any set of axioms on *Ω = {1, 2, 3}* that admits concave monetary value measures is not complete.
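A quick numeric experiment illustrates the failure (λ, the p_{i}, and (a, b, c) below are arbitrary hypothetical values): choose any Z consistent with the three left-hand equalities, solve them for c’, a’, b’, and compare Z with p_{1}e^{λa} + p_{2}e^{λb} + p_{3}e^{λc}.

```python
import math

lam = 1.0
p1, p2, p3 = 0.2, 0.3, 0.5   # hypothetical probabilities
a, b, c = 1.0, 2.0, 3.0      # hypothetical values

# Pick a Z large enough that the arguments of log below stay positive,
# then solve the three equalities for c', a', b'.
Z = max(p1*math.exp(lam*a), p2*math.exp(lam*b), p3*math.exp(lam*c)) + 1.0
cp = math.log((Z - p1*math.exp(lam*a)) / (1 - p1)) / lam
ap = math.log((Z - p2*math.exp(lam*b)) / (1 - p2)) / lam
bp = math.log((Z - p3*math.exp(lam*c)) / (1 - p3)) / lam

rhs = p1*math.exp(lam*a) + p2*math.exp(lam*b) + p3*math.exp(lam*c)
# The implication would require Z == rhs, but the two values differ here.
print(Z, rhs)
```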

According to the China University Ranking, this is one of the top 100 universities in China. Chemistry, Physics, and Engineering are three disciplines ranked in the global top 1% by ESI (Essential Science Indicators). The main campus is located in one of the four metropolitan areas of Zhejiang province. The university covers a total area of more than 200 hectares, with a well-equipped library, laboratories, a gymnasium, an entertainment center, a hospital, a supermarket, etc.

Studies of Comparative Education

Studies of Pre-school Education

Studies of Higher Education

Applied Mathematics

Operational Research & Cybernetics

**Tuition Fees:** 22,800 RMB/year

To be eligible, applicants must

1) Be a citizen of a country other than the People’s Republic of China, Nigeria, Iran, Iraq, Benin, Sierra Leone and Afghanistan, and be in good health;

2) Have an overall score of 3/5 or 60/100;

3) Be aged between 18 and 35.

Application Deadline: May 30th, 2017

Accommodation: 7,200 RMB/year

Location: Zhejiang

Classes Starting: September

**PhD:** 34,800 RMB (full tuition fee and a 10,000 RMB living allowance)

*OUR SERVICE CHARGE:* 1,750 USD

Complete Q. 8, 10, and 12 of the NCERT textbook in NB1.

Submit your NB1 by tomorrow. Prepare for the activity on Rational Numbers on 20.01.17.

Complete the remaining questions of Ex. 11.3 in NB1.

Solve the following questions in your diary by 20.01.17 and ask your doubts, if any, in class.

1. One row has 10 chairs. Find a rule which gives the total number of chairs for t rows.

2. Each purse contains 8 coins. Find a rule that gives the total number of coins in b purses.

3. Write the following as a product:

(i) m + m + m + m + m + m + m + m

(ii) g + g + g + g + g + g + g + g + g + g + g

(iii) t + t + t + t

(iv) a + a + a + a + a + a + a

4. Write the algebraic expression for the following:

(i) the sum of x and 12

(ii) subtract 10 from t

(iii) subtract p from q

(iv) the product of m and 20

(v) b divided by 8

Another variation of a rhodonite structure from the W. Nelson and D. Griffen database.

Here is today’s H.W.


Applying this matrix to any vector “doubles” the magnitude of the vector:

This applies to any vector except, of course, the zero vector, which is trivially scaled by every factor and is therefore excluded from our discussion in this post.

The interesting case, however, is when the matrix “scales” only a few special vectors. Consider, for example, the matrix

.

Applying it to the vector

gives us

.

This is, of course, not an example of “scaling”. However, for the vector

we get

.

This is a scaling, since

.

The same holds true for the vector

from which we obtain

which is also a “scaling” by a factor of . Finally, this also holds true for scalar multiples of the two vectors we have enumerated. These vectors, the only “special” ones that are scaled by our linear transformation (represented by our matrix), are called the **eigenvectors** of the linear transformation, and the factors by which they are scaled are called the corresponding **eigenvalues**.
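Since the post’s original matrices were rendered as images that are no longer available, here is a small stand-in sketch (the matrix below is my own hypothetical choice, not the one from the post) showing a matrix that “scales” only special vectors:

```python
# A hypothetical 2x2 matrix (the post's original example is not recoverable here).
M = [[3, 1],
     [0, 2]]

def apply(M, v):
    """Multiply the 2x2 matrix M by the 2-vector v."""
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

v = [1, 0]
print(apply(M, v))   # [3, 0] = 3 * v, so v is an eigenvector with eigenvalue 3

w = [1, -1]
print(apply(M, w))   # [2, -2] = 2 * w, so w is an eigenvector with eigenvalue 2

u = [1, 1]
print(apply(M, u))   # [4, 2], not a scalar multiple of u, so u is not an eigenvector
```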

So far we have focused on **finite-dimensional** vector spaces, which give us a lot of convenience; for instance, we can express finite-dimensional vectors as column matrices. But there are also **infinite-dimensional** vector spaces; recall that the conditions for a set to be a vector space are that its elements can be added or subtracted, and scaled. An example of an infinite-dimensional vector space is the set of all continuous real-valued functions of the real numbers (with the real numbers serving as the field of scalars).

Given two continuous real-valued functions of the real numbers f and g, the functions f + g and f − g are also continuous real-valued functions of the real numbers, and the same is true for cf, for any real number c. Thus we can see that the set of continuous real-valued functions of the real numbers forms a vector space.

Matrices are not usually used to express linear transformations when it comes to infinite-dimensional vector spaces, but we still retain the concept of eigenvalues and eigenvectors. Note that a linear transformation T is a function from a vector space to another (possibly itself) which satisfies the conditions T(u + v) = T(u) + T(v) and T(cv) = cT(v).

Since our vector spaces in the infinite-dimensional case may be composed of functions, we may think of linear transformations as “functions from functions to functions” that satisfy the conditions earlier stated.

Consider the “operation” of taking the derivative (see An Intuitive Introduction to Calculus). The rules of calculus concerning derivatives (which can be derived from the basic definition of the derivative) state that we have

d/dx (f + g) = df/dx + dg/dx

and

d/dx (cf) = c df/dx,

where c is a constant. This holds true for “higher-order” derivatives as well. This means that the “derivative operator” is an example of a linear transformation from an infinite-dimensional vector space to another (note that the functions that comprise our vector space must be “differentiable”, and that the derivatives of our functions must possess the same defining properties we required for our vector space).

We now show an example of eigenvalues and eigenvectors in the context of infinite-dimensional vector spaces. Let our linear transformation be

d²/dx²

which stands for the “operation” of taking the second derivative with respect to x. We state again some of the rules of calculus pertaining to the derivatives of trigonometric functions (once again, they can be derived from the basic definitions, which is a fruitful exercise, or they can be looked up in tables):

d/dx sin(x) = cos(x) and d/dx cos(x) = −sin(x),

which means that

d²/dx² sin(x) = −sin(x);

we can see now that the function sin(x) is an eigenvector of the linear transformation d²/dx², with eigenvalue equal to −1.
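The eigenvalue relation for the second derivative of the sine function can be checked numerically with a central-difference approximation (the test point x and step size h below are arbitrary choices):

```python
import math

def second_derivative(f, x, h=1e-4):
    """Central-difference approximation of f''(x)."""
    return (f(x + h) - 2*f(x) + f(x - h)) / h**2

x = 0.7  # an arbitrary test point
approx = second_derivative(math.sin, x)
print(approx, -math.sin(x))  # the two values agree closely: eigenvalue -1
```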

Eigenvalues and eigenvectors play many important roles in linear algebra (and its infinite-dimensional version, which is called **functional analysis**). We will mention here something we have left off of our discussion in Some Basics of Quantum Mechanics. In quantum mechanics, “observables”, like the position, momentum, or energy of a system, correspond to certain kinds of linear transformations whose eigenvalues are real numbers (note that our field of scalars in quantum mechanics is the field of complex numbers ℂ). These eigenvalues correspond to the only values that we can obtain after measurement; we cannot measure values that are not eigenvalues.

References:

Eigenvalues and Eigenvectors on Wikipedia

Linear Algebra Done Right by Sheldon Axler

Algebra by Michael Artin

Calculus by James Stewart

Introductory Functional Analysis with Applications by Erwin Kreyszig

Introduction to Quantum Mechanics by David J. Griffiths


Here is today’s HW

With Regards,

Preeti Lashkari

Here is today’s HW

With Regards,

Charu Soni

“For us, the cave paintings re-create the hunter’s way of life as a glimpse of history; we look through them into the past. But for the hunter, I suggest, they were a peep-hole into the future; he looked ahead. In either direction, the cave paintings act as a kind of telescope tube of imagination: they direct the mind from what is seen to what can be inferred or conjectured. Indeed, this is so in the very action of painting; for all its superb observation, the flat picture only means something to the eye because the mind fills it out with roundness and movement, a reality by inference, which is not actually seen but imagined.”

Let G be a group and X a set. A group action of G on X is a function α : G × X → X that satisfies the properties of

- identity, i.e. α(e, x) = x, where e is the identity element of G,
- and compatibility, i.e. α(g, α(h, x)) = α(gh, x).

For convenience, the shorthands gx and g(hx) are used in place of α(g, x) and α(g, α(h, x)), respectively.

Let Sym(X) denote the symmetric group of X, that is the group of bijective functions from X to X. Consider the family of bijective functions σ_g : X → X, where g ∈ G and σ_g(x) = gx.

It is now possible to define the function φ : G → Sym(X) as φ(g) = σ_g. In words, φ maps an element g ∈ G to the bijective function σ_g defined by σ_g(x) = gx.

The common representation of a group action as a homomorphism relies on the following equivalence: a function α : G × X → X is a group action if and only if the function φ : G → Sym(X), with φ(g) = σ_g, is a group homomorphism. The main point of this blog post has been to formalize this statement. For the sake of completeness, its proof will be provided, albeit being simple.

Firstly, assume that α is a group action. It will be shown that φ is a homomorphism. By definition, φ(gh) = σ_{gh}. Moreover, σ_g(σ_h(x)) = (σ_g σ_h)(x), where the product denotes function composition, the operation of the symmetric group Sym(X). Since α is a group action, it follows that σ_{gh}(x) = (gh)x = g(hx) = σ_g(σ_h(x)), therefore φ(gh) = σ_{gh} = σ_g σ_h = φ(g)φ(h), so φ is a group homomorphism.

Conversely, assume that φ is a homomorphism. It will be shown that α is a group action.

The identity e belongs to the kernel of φ; φ(e) = φ(ee) = φ(e)φ(e), whence φ(e)^{-1}φ(e)φ(e) = φ(e)^{-1}φ(e), so φ(e) = id_X, with id_X being the identity function in Sym(X). Furthermore, σ_e = φ(e) = id_X, so σ_e(x) = x, which means that ex = x. The property of identity is thus satisfied for α.

Since φ is a homomorphism, it follows that φ(gh) = φ(g)φ(h), so σ_{gh} = σ_g σ_h. Finally, (gh)x = σ_{gh}(x) = (σ_g σ_h)(x) = σ_g(σ_h(x)) = g(hx), hence the compatibility property of α is also satisfied, and α is a group action.
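To make the equivalence concrete, here is a small computational sketch (the group Z_n, the set X, and the rotation action are my own toy choices, not from the post): the induced map φ(g) = σ_g is checked to satisfy both the identity and the homomorphism property.

```python
from itertools import product

n = 4
G = range(n)   # Z_n under addition modulo n (a toy example group)
X = range(n)

def alpha(g, x):
    """The action: rotation of X by g positions."""
    return (g + x) % n

def sigma(g):
    """phi(g) = sigma_g, encoded as the tuple (sigma_g(0), ..., sigma_g(n-1))."""
    return tuple(alpha(g, x) for x in X)

def compose(s, t):
    """Function composition (s . t)(x) = s(t(x)) on tuple-encoded bijections."""
    return tuple(s[t[x]] for x in X)

# Identity: sigma_e is the identity function for e = 0.
assert sigma(0) == tuple(X)

# Homomorphism property: sigma_{gh} = sigma_g . sigma_h for all g, h.
assert all(sigma((g + h) % n) == compose(sigma(g), sigma(h))
           for g, h in product(G, G))
print("phi is a group homomorphism")
```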

Elephants, and now, kookaburras never forget.

According to Mashable, one of the birds recently ended up in the pool of a Sydney man named Majid Shahen. Shahen scooped the drowning bird out of the water and immediately started giving the animal CPR.

After several seconds of mouth-to-mouth, Shahen decided that an air pump would do a better job. His quick thinking saved the threatened bird’s life, as the harrowing video above, filmed by his child, shows.

In lieu of sending a gift basket in thanks, the kookaburra has opted to drop in on Shahen to show his appreciation. Shahen says the bird, who has fully recovered from his dramatic dip, has stopped by his backyard every day since the incident to say hello.

Since the bird is becoming a fixture in Shahen’s life, he has decided to name the kookaburra George, because you should always call your friends by name.


There are basically three possible cases for the matrix A in the equation system

Ax = b:

- The matrix A is square and non-singular (its determinant is different from zero). In this case, the equation system is solved by

x = A^{-1}b,

because the inverse of A exists.

- The matrix A is “tall” (more rows than columns) and its columns are linearly independent. In this case, we will need the left pseudoinverse. If A is tall (and its columns linearly independent), then the matrix A^{T}A is square and non-singular. It can be inverted. Indeed:

Ax = b,

A^{T}Ax = A^{T}b,

(A^{T}A)^{-1}A^{T}Ax = (A^{T}A)^{-1}A^{T}b,

Ix = (A^{T}A)^{-1}A^{T}b,

x̂ = (A^{T}A)^{-1}A^{T}b,

where x̂ is the best x, the one that minimizes ‖Ax − b‖.

- The matrix A is “wide” (more columns than rows) and its rows are linearly independent. This will call for the right pseudoinverse. If A is wide (and its rows linearly independent), then the matrix AA^{T} is square and non-singular. It can be inverted. Let’s see:

Ax = b,

AA^{T}(AA^{T})^{-1}b = b,

A(A^{T}(AA^{T})^{-1}b) = b,

x = A^{T}(AA^{T})^{-1}b.

The usage is to denote the pseudoinverse of a matrix A as A^{+}. Here, left and right do not refer to the side of the vector on which we find the pseudoinverse, but on which side of the matrix A we find it. As you know, the matrix product is not commutative, that is, in general we have AB ≠ BA. When the matrix A is square and non-singular, the normal inverse and the right and left pseudoinverses coincide: we have A^{+} = A^{-1}. Otherwise, depending on whether A is tall or wide, we either have A^{+}A = I or AA^{+} = I.
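A minimal numeric sketch of the tall case (the matrix A below is a hypothetical example with independent columns; the left pseudoinverse (A^{T}A)^{-1}A^{T} is computed by hand with a 2×2 inverse):

```python
def matmul(A, B):
    """Multiply matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def inv2(M):
    """Inverse of a 2x2 matrix."""
    a, b = M[0]
    c, d = M[1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[1, 0], [1, 1], [0, 1]]  # tall: 3 rows, 2 independent columns
At = transpose(A)
A_plus = matmul(inv2(matmul(At, A)), At)  # left pseudoinverse (A^T A)^{-1} A^T

print(matmul(A_plus, A))  # A+ A is the 2x2 identity (A A+ is not the identity)
```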

*

* *

So what prompted me to revisit the pseudoinverse like this? Well, I was looking into old notes about quadraphonic sound, and realized that since the “compression” matrix (the one that maps four channels onto two) is “wide”, the derivation used for the channel mixing experiment (the otter story) doesn’t quite work. In fact, it doesn’t work at all in this case! I foolishly relied on Mathematica, which found the correct right pseudoinverse, and used that pseudoinverse. The derivation for the *left* pseudoinverse was correct, but I needed the *right* pseudoinverse. Nemo est perfectus.

Let *A* be “Mathematics”, and let *B* be “Data Science”. This is certainly not the first article vying for attention with the latter buzzword, so I’ll go ahead and insert a few more here to help boost traffic and readership:

Analytics, Machine Learning, Algorithm,

Neural Networks, Bayesian, Big Data

These formerly technical words (except that last one) used to live solidly in the dingy faculty lounge of set *A*. They have since been distorted into vague corporate buzzwords, shunning their well-defined mathematical roots for the sexier company of “synergy”, “leverage”, and “swim lanes” at refined business luncheons. All of the above words have allowed themselves to become elements of the nebulous set *B*: “Data Science”. As the entire corporate and academic world scrambles to rebrand itself as a member of Big Data™, allow me to pause the chaos in order to reclaim set *A*. This isn’t to say that set *B* is without its merits. Data Science is Joss Whedon, making the uncool comic books so hip that Target sells T-shirts now. The advent of powerful computational resources and a worldwide saturation of data have sparked a mathematical revival of sorts. (It is actually possible for university mathematics departments to receive funding now.) Data Science has inspired the development of methods for quantifying every aspect of life and business, many of which were forged in mathematical crucibles. Data science has built bridges between research disciplines, and sparked some taste for a subject that was previously about as appetizing to most as dry Thanksgiving turkey without gravy. Data science has driven billions of dollars in sales across every industry, customized our lives to our particular tastes, and advanced medical technology, to name a few. Moreover, the techniques employed by data scientists have mathematical roots. Good data scientists have some mathematical background, and my buzzwords above are certainly in both sets. Clearly, *A ∩ B* is nonempty, and the two sets are not disjoint. However, the symmetric difference between the two sets is large. Symbolically, *A △ B* is large. To avoid repetition of the plethora of articles about Data Science, our focus will be on the elements of mathematics that data science lacks. In mathematical symbols, we investigate the set A \ B.

Mathematics is simplification. Mathematicians seek to strip a problem bare. Just as every building has a foundation and a frame, every “applied” problem has a premise and a structure. Abstracting the problem into a mathematical realm identifies the facade of the problem that previously seemed necessary. An architect can design an entire subdivision with one floor plan, and introduce variation in cosmetic features to produce a hundred seemingly different homes. Mathematicians reverse this process, ignoring the unnecessary variation in building materials to find the underlying structure of the houses. A mathematician can solve several business problems with one good model by studying the anatomy of the problems.

Mathematics is rigor. My real analysis professor in graduate school told us that a mathematician’s job is two-fold: to break things and to build unbreakable things. We work in proofs, not judgment. Many of the data science algorithms and statistical tests that get name dropped at parties today are actually quite rigorous, *if the assumptions are met*. It is disingenuous to scorn statistics as merely a tool to lie; one doesn’t blame the screwdriver that is being misused as a hammer. Mathematicians focus on these assumptions. A longer list of assumptions prior to a statement indicates a weak statement; our goal is to strip assumptions one by one to see when the statement (or algorithm) breaks. Once we break it, we recraft it into a stronger statement with fewer assumptions, giving it more power.

Mathematics is elegance. Ultimately, this statement is a linear combination of the previous two, but still provides an insightful contrast. Data science has become a tool crib of “black box” algorithms that one employs in his language of choice. Many of these models have become uninterpretable blobs that churn out an answer (even good ones by many measures of performance; pick your favorite measure: p-values, Euclidean distance, prediction error). They solve the specific problem given wonderfully, molding themselves to the given data like a good pair of spandex leggings. However, they provide no structure, no insight beyond that particular type of data. Understanding the problem takes a back seat to predictions, because predictions make money, especially before the end of the quarter. Vision is long-term and expensive. This type of thinking is short-sighted; with some investment, that singular dataset may reveal a structure that is isomorphic to another problem in an unrelated department, and even one that may be exceedingly simple in nature. In this case, mathematics can provide an interpretable, elegant solution that solves multiple problems, provides insight into behavior, and still retains predictive power.

As an example, let us examine the saturated research field of disk failures. There is certainly no shortage of papers that develop complex algorithms for disk failure prediction; typically the best-performing ones are an ensemble method of some kind. Certain errors are good predictors of disk failure, for instance, medium errors and reallocated sectors. These errors evolve randomly, but always increase. A Markov chain fits this behavior perfectly, and we have developed a method to model these errors with one. Parameter estimation is a challenge, but the idea is simple, elegant, and interpretable. Because the mathematics is so versatile, with just one transition matrix a user can answer almost any question he likes without needing to rerun the model. This approach allows for both predictive analytics and behavior monitoring, is quick to implement, and is analytically (in the mathematical sense) sound. The only estimation needed is in the parameters, not in the model structure itself. Effective parameter estimation will effectively guarantee good performance.
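A sketch of the idea (the states and transition probabilities below are purely illustrative assumptions, not a fitted model from any paper): because errors never decrease, the transition matrix is upper-triangular, with “failed” as an absorbing state, and one matrix answers questions like “what is the probability of failure within a horizon?”

```python
# States: 0 errors, 1 error, 2+ errors, failed (absorbing).
# Probabilities are made up for illustration; mass only flows toward
# higher error counts, so the matrix is upper-triangular.
P = [[0.90, 0.08, 0.02, 0.00],
     [0.00, 0.85, 0.10, 0.05],
     [0.00, 0.00, 0.80, 0.20],
     [0.00, 0.00, 0.00, 1.00]]

def step(dist, P):
    """One step of the chain: dist' = dist @ P."""
    return [sum(dist[i] * P[i][j] for i in range(len(P)))
            for j in range(len(P))]

dist = [1.0, 0.0, 0.0, 0.0]  # start from a healthy disk
for _ in range(12):          # e.g. 12 reporting periods
    dist = step(dist, P)

print(dist[-1])  # probability the disk has failed within the horizon
```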

There is room for both data scientists and mathematicians; the relationship between a data scientist and a mathematician is a symbiotic one. Practicality forces a quick or canned solution at times, and sometimes the time investment needed to “reinvent the wheel” when we have (almost) infinite storage and processing power at hand is not good business. Both data science and mathematics require extensive study to be effective; one course on Coursera does not make one a data scientist, just as calculus knowledge does not make one a mathematician. But ultimately, mathematics is the foundation of all science; we shouldn’t forget to build that foundation in the quest to be industry Big Data™ leaders.

~*Rachel Traylor @mathpocalypse*


**A) 54 **

**B) 64 **

**C) 62**

**D) 58 **

**E) 56**

**2) 8 14 25 46 82 ?**

**A) 132**

**B) 130 **

**C) 138**

**D) 168**

**E) 172**

**3) 13 14 30 93 ? 1885**

**A) 358**

**B) 336**

**C) 364**

**D) 386**

**E) 376**

**4) 65 70 63 74 61 ?**

**A) 78**

**B) 58**

**C) 72**

**D) 76**

**E) 80**

**5) 9 11 16 33 98 ?**

**A) 350**

**B) 355**

**C) 354**

**D) 364**

**E) 352**

**Note:**

**Each question carries 2 marks (−1 mark for a wrong answer).**

**Time limit: 10 minutes.**
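One way to check an answer mechanically; for series 4), a plausible reading (my own assumption, not an official key) is that successive primes 5, 7, 11, 13, 17 are alternately added and subtracted:

```python
# Series 4) 65 70 63 74 61 ?  -- alternately add/subtract successive primes.
seq = [65]
for i, p in enumerate([5, 7, 11, 13, 17]):
    seq.append(seq[-1] + p if i % 2 == 0 else seq[-1] - p)

print(seq)  # [65, 70, 63, 74, 61, 78] -> option A) 78
```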