Feedback is a concept proposed by Norbert Wiener in *The Human Use of Human Beings*. In his words it is “the property of being able to adjust future conduct by past performance.” This is key because an organism, be it biological or mechanical, “moves ahead from a known past into an unknown future…” Feedback is crucial in assisting with these future challenges. It is how experience is put to memory and use.

To adjust to changes, there needs to be a two-way form of communication. A signal is sent and received. Acknowledged by the sender, it is stored as past experience. Then adjustment can happen; then feedback can occur.

Because of this, Wiener finds applause to be a form of feedback. It is there “to establish in the performer’s mind some modicum of two-way communication.” Once established, a musician can respond to the crowd: know that she is doing a good job, know that she has to regain lost ground.

Feedback becomes interesting when we bring it into the realm of the Internet. Many would argue that two-way communication occurs on YouTube and SoundCloud. A song is put up; people can like and share the song accordingly. There is the applause. One cannot argue with that.

The question then would be where the feedback occurs. Naturally it would be in the artist. If a song were popular, viewed and liked and shared, he would find a way to write another song in a similar fashion. If it tanked, he would try something different.

We could end right there, but this is swimming around and avoiding one thing: the song itself. Can a digital song perform feedback? Could it adjust to dismal ratings by finding more popular songs and mutate itself to share traits of those tunes? And what if it were a popular song? Could it find a way to lead people to other songs of the artist?

This speculation is not as far-fetched as one might think. The work of Nils Barricelli, performed more than fifty years ago, tells us that code can reproduce and evolve accordingly. Fast-forward to now. It will not be long until digital songs, via feedback, adapt for survival on the Internet. That is, if they are not already.

During the program, one of the most impactful interventions came on the Friday afternoon of the first week. By then, the program had run five days and the 75 successful managers, 80%+ male, had behaved as they usually do. The intervention was simple: go back to your small (6-7 people) study groups, with whom you have been working over the past week. I wanted the participants to take turns and tell each of the other participants the one thing they wished that participant did more of, and one thing they wished they did less of. Focus on behaviours only, not attitude. The person receiving the feedback was not allowed to interrupt, but could request a clarifying example in case they had not fully understood the feedback. For most of the participants, this was the first honest feedback they had received in their careers. Those providing the feedback were successful peers with no axe to grind. If you chose, you could avoid interacting with these people again, as the study groups changed each week.

When they returned after an hour, I would put up the Johari window that describes the public, private, blind, and unknown self. The point was that each manager has a blind self which they need to be aware of as they go about their careers. And, of course, the size of this blind self depends on one’s self-awareness.

*https://en.wikipedia.org/wiki/Johari_window*

Almost all of us believe we are smarter, better looking, and more charming than we are. The psychological literature has demonstrated that these “positive illusions” can be powerful and functional. The problem arises when the gap between our beliefs about ourselves and what others think of us gets too large. Then we can become delusional, and feel we are not valued enough by the world at large.

In fact, this blog in its earlier avatar was triggered by a conversation in the Tata executive lunch room. An executive, whom the consensus held to be the most “political”, proclaimed that the only kind of managers he finds difficult to work with are those who are political. This led to silence in the room as the other 6-7 colleagues looked at each other, flummoxed as to what to say next. Finally, after a pause and unable to resist, I said, well, there is a difference between how we perceive ourselves and how the world perceives us. Now, it may be that he was not delusional, but just attempting some good old-fashioned impression management. Regardless, the Johari window is a powerful psychological concept and helpful in seeking as well as interpreting feedback. Each of us can benefit from reflecting on it and applying it to ourselves.

The Johari window concept was applied to countries by JWT to examine if there are some countries that have a larger “blind self” than others. The results were interesting and I include a direct quote from the report:

“What is worrying of course is that the same JWT study also shows that we think much better of ourselves than the world thinks of us. The large gap in self-perception and world perception is that we may not be very amenable to feedback about ourselves. If a man thinks highly of himself and others do not think highly of him, there is conflict and a mismatch in the egogram and the difference needs to be bridged. Columbia and India seem to have the highest opinion of themselves.”

For more on this study, please examine:

http://marketingbuzzar.com/2016/02/what-will-be-the-image-of-made-in-india/

Warmly

**Nirmalya**

**Magic Dust**

I was recently with my ~~fiancé’s~~ forever-girlfriend’s family and things got heated and a good old-fashioned game of Rock, Paper, Scissors was needed to settle the score.

I line up face-to-face with her father and plan on using the paper-first tactic, assuming he’s comin’ in hot with the rock. I get my fist cocked and ready to go, and I verbalize the chant, syncing my clenched fist up with my words.

ROCK, PAPER, SCI….

And he throws out rock **WAY TOO EARLY**.

Now where I am from, you literally SAY Rock, Paper, Scissors, and then SAY SHOOT. When you say “Shoot” you throw out your choice. In other words, you throw out the Rock **on the 4th beat**.

Future Father-in-Law throws out the Rock on the 2nd beat like some sort of alien. Now, I’m looking around the table in utter disbelief because my world has just been turned upside down. The REST of the family are all on HIS side. They’re mostly from California so I’m sitting there thinking, “is this a California thing or did this father corrupt all of his family with misinformation?”

We continued on, but I literally couldn’t break the 30-year-old habit of throwing out the goods on the 4th beat. I became unhinged and lost the psychological battle.

Fast forward to this week and this movie comes on TV and they literally have an RPS battle. SHOCKED AND CHAGRINED because they threw out the rock on the 2nd beat! But my mind is in a mirror-maze pretzel because here on The Tub, we do our research, and according to this 2008 RPS Championship, you throw out the goods **after** the 3rd beat.

So, I’m taking a poll here because I need answers:

If we have data, let’s look at data. If all we have are opinions, let’s go with mine. –

Jim Barksdale

This blog started with a mathematical analysis of what this blog should be about, and a promise to explain why p-values aren’t part of that analysis. 14 months later, I’m bringing Putanumonit full circle by doing a mathematical analysis of what this *post* should be about, and explaining in detail why p-values suck.

In case you were worried (or hopeful), bringing the blog full circle doesn’t mean it’s ending. I plan to keep going in a straight line after closing the circle, which should make the entire blog look like the digit 6.

A couple of weeks ago I asked my readers for some greatness-enhancing feedback via survey and email. A lot of the readers who emailed me think that they are much bigger math geeks than the other readers who emailed me, which is pretty funny. But one reader actually proved it by putting a num on it:

I am a huge math nerd, so my interests and knowledge when it comes to this sort of thing are probably at least four standard deviations from the mean of humanity, and at least two from your reader base. –ER

ER’s email is wonderful for two reasons:

- It says *“the mean of humanity”* where most people would say *“most people”*.
- It makes an explicit, testable prediction: that my readers are 2 standard deviations math-nerdier than the mean of the human species.

If math-nerdiness is normally distributed, at +2 SDs my average reader is math-nerdier than 97.7% of humanity. Did you know that math is teens’ favorite subject in high school, year in, year out? If normal humans admit (at least in anonymous surveys) to loving math, surely my readers are on board as well.

Yep. If I was worried about making Putanumonit *too* math-nerdy, now I’m worried that it’s not going far enough for my readers. Last week I wrote a post about eating fish and my commenters responded with detailed analysis, math jokes, mental calculation tricks and extensions to famous math puzzles. I feel a bit inadequate; my favorite subjects in school were gym and French.

The only metrics that entrepreneurs should invest energy in collecting are those that help them make decisions. Unfortunately, the majority of data available in off-the-shelf analytics packages are what I call Vanity Metrics. They might make you feel good, but they don’t offer clear guidance for what to do. –

Eric Ries

There’s a point to the opening part of this post, and that’s to demonstrate that data analysis **should have a point**. The point of the survey above was to help me decide if I should write more or less mathematical detail in my posts, and it doesn’t take much analysis to see that the answer is “way more”. OK then, I certainly will.

We’ll get to some of the other survey results, but I first want to step back and take a broader view of data analysis. In that view, almost all data analysis you see is conducted with one of two goals in mind:

- Drive a decision.
- Get published in an academic journal.

You may have heard of the war raging between the armies of Bayesian and frequentist analysis. Like all global conflicts, this war reached its peak with a clever XKCD comic and its nadir with a sensationalistic New York Times story.

The skirmishes in this war today consist mainly of Bayesians showing frequentists why their methods (i.e. p-values) are either wrong or secretly Bayesian, and salty frequentists replying that the methods work just fine, thank you. In some sense they’re right: as long as academic journals are happy to accept papers based on p-values, getting a p-value is a great way of achieving that goal. It’s the first goal where frequentist analysis suffers, and a more mindful technique is needed.

Properly done, decision-driven analysis should work backwards from the question to the data:

**Step 1** – **Formulate the question or decision** you want to answer with data.

*Should I trim my posts to make them shorter or run with longer articles?*

**Step 2 – Quantify alternative possibilities,** or possible answers to the question. Each alternative is a model of what the world looks like in the narrow realm you’re interested in. The goal of the data is to promote one of the alternatives above the rest, so each alternative should correspond to a decision option or an answer to the main question.

Working backwards from the decision I want to make: I will change the length I aim for in my posts if the change would please an extra one in four readers. Specifically:

*Alternative 1 – I would write longer posts (a 25% advantage in favor of long posts).*

*Alternative 2 – I would write shorter posts (a 25% advantage in favor of short posts).*

*Alternative 3 – I would keep the same length of posts (no advantage either way).*

**Step 3 – Put a number on how likely each alternative is.** This should be based on your background knowledge, whatever information you have *before* the new data comes in. This is called the prior, and for some reason it makes more people uncomfortable than Richard Pryor. We’ll get to both later.

*My most popular posts were on the longer side, but not overwhelmingly so. I think people probably like both the same, and maybe lean towards longer posts. My priors are 30% for Alternative 1 that longer is better (that’s what she said), 10% for Alternative 2 and 60% for Alternative 3.*

**Step 4 – Use the new evidence to update your prior probabilities in accordance with Bayes’ rule** and arrive at the *posterior*, or the outcome. Make your decisions accordingly.

The verbal logic of Bayes’ rule is that whichever *alternative* gave the highest probability of seeing the *evidence* we actually observed is the one most supported by the evidence. “Support” means that the probability of the alternative increases from its prior. I’m going to demonstrate the use of Bayes’ rule, but I’m not going to explain it further. That’s because **Arbital’s guide to Bayes’ rule** does such an excellent job of explaining it to any level of skill and need that I don’t want to step on their turf. If you have any doubts, big or small, about what the rule says and why it works go ahead and spend some time on Arbital. I’m not going anywhere.

You’re back? Good. I assume you’re hip to Bayes now and we can jump straight into the calculation.

We have three competing hypotheses and prior probabilities on all three. I’m going to summarize the prior probabilities in odds form, because that’s the easiest form to apply Bayes’ rule to. Our odds ratios for Alternative 1 (longer is better):Alternative 2 (shorter):Alternative 3 (same) are 30%:10%:60%, or 3:1:6.

All we need now is the evidence. I have combined a couple of survey questions and their responses to get this basic breakdown:

| Preference | Responses | Share |
|---|---|---|
| Longer is better | 131 | 34% |
| Equally good | 183 | 47% |
| Shorter is better | 72 | 19% |
| Total | 386 | 100% |

The gap between people who prefer longer posts to shorter ones is 34% – 19% = 15%. That wasn’t one of our three alternatives, but it’s definitely closer to some than to others. Ideally, instead of picking a few discrete possibilities we would have a prior and a posterior on a *continuous distribution* of some parameter (like the long-short gap). I’ll go through a continuous-parameter analysis in the next post.

We now need to calculate the probability of seeing this result given each of the three alternatives, *P(131-72 long-short split | alternative N)*. I whipped up a very quick and dirty R simulation to do that, but with a couple of assumptions we can get that number in a single Excel line. To make life easy, we’ll assume that the number of people who show no preference is fixed at 183 out of the 386, so our alternatives boil down to how the other 203 readers are split.

Alternative 1 – There’s a 25% advantage to long posts, and 25% of 386 responders is 97. So this alternative says that out of the 203 who show a preference, 97 more will prefer long posts. This means that these 203 readers will be split 150 in favor of long, 53 in favor of short. 150/203 is 74%, so this alternative predicts that 74% of 203 prefer long posts.

Similarly, Alternative 2 says that 53 of 203, or 26% prefer long posts, and Alternative 3 says that 50% of them do (i.e. an equal number prefer long and short). Now we simply treat each responder as making an independent choice between long and short with a fixed probability of 74%, 50% or 26%. For each option, the chance of getting our observed 131-72 split is given by the binomial distribution which you can calculate in Excel or Google Sheets with the BINOMDIST function.

P(evidence | Alternative 1) = P(131 long – 72 short | 74% for long) = BINOMDIST(131,203,0.74,0) = .0007

P(evidence | Alternative 2) = P(131 long – 72 short | 26% for long) = BINOMDIST(131,203,0.26,0) ≃ 0, as far as Google Docs can calculate. Let’s call this **ε.**

P(evidence | Alternative 3) = P(131 long – 72 short | 50% for long) = BINOMDIST(131,203,0.5,0) = .0000097
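The spreadsheet calls above translate directly into a few lines of Python. Here’s a minimal sketch (standard library only; `binom_pmf` is my helper name) that recomputes the three likelihoods from the binomial formula:

```python
from math import comb

def binom_pmf(k, n, p):
    """P(X = k) for X ~ Binomial(n, p): exactly k of n readers prefer long."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

k, n = 131, 203  # 131 of the 203 opinionated readers preferred long posts

like_alt1 = binom_pmf(k, n, 0.74)  # Alternative 1: 74% prefer long
like_alt2 = binom_pmf(k, n, 0.26)  # Alternative 2: 26% prefer long
like_alt3 = binom_pmf(k, n, 0.50)  # Alternative 3: even split

print(f"{like_alt1:.4f}")            # ~0.0007
print(f"{like_alt2:.1e}")            # vanishingly small -- the epsilon
print(f"{like_alt3:.7f}")            # ~0.0000097
print(round(like_alt1 / like_alt3))  # ~72, the likelihood ratio
```

`math.comb` keeps the binomial coefficient exact, so there’s no need for a statistics package here.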

The likelihood ratios we get are 0.0007 : **ε** : 0.0000097 = 72:**ε**:1. We’ll deal with the **ε** after calculating the posterior odds.

Posterior odds for alternatives 1:2:3 = prior odds * likelihood odds = 3:1:6 * 72:**ε**:1 = 216:**ε**:6. We can go from odds back to probabilities by dividing the odds of each alternative by the total odds. 216 + 0 + 6 = 222 so:

Probability that a lot more people prefer long posts = 216/222 = 97%.

Probability that an equal number prefer long and short posts = 6/222 = 3%.
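The odds arithmetic above is a one-liner in any language. Here’s the same update written out as a Python sketch of this post’s numbers (treating the negligible **ε** as zero):

```python
# Prior odds for Alternatives 1:2:3, and likelihood odds from the survey
prior_odds = [3, 1, 6]        # 30% : 10% : 60%
likelihood_odds = [72, 0, 1]  # the tiny epsilon treated as zero

# Bayes' rule in odds form: posterior odds = prior odds * likelihood odds
posterior_odds = [p * l for p, l in zip(prior_odds, likelihood_odds)]
total = sum(posterior_odds)   # 216 + 0 + 6 = 222
posterior = [odds / total for odds in posterior_odds]

print([round(p, 2) for p in posterior])  # [0.97, 0.0, 0.03]
```

Working in odds form means the prior never has to be renormalized; dividing by the total at the end recovers probabilities.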

The probability that a lot more people prefer *short* is bounded by the Rule of Psi we formulated in the last post:

**Rule of Psi** (posterior edition) – A study of parapsychological ability to predict the future produced a p-value of 0.00000000012. That number is only meaningful if you have absolute confidence that the study was perfect; otherwise you need to consider your confidence *outside* the result itself, i.e. the probability that the study is useless. If you think, for example, that there’s an **ε** chance that the **result is completely fake**, then that **ε** is roughly the floor on your *posterior probabilities*.

There’s at least a 0.1% chance that my data is useless. Either I formulated the questions wrong, or the plugin didn’t count them, or someone who loves long posts voted 200 times to confuse me. This means that we shouldn’t let the data push the probability of any alternative too far below 0.1%, so we’ll set that as the posterior for more people preferring short posts.

Our simple analysis led us to an actionable conclusion: there’s a 97% chance that *a lot* of my readers prefer longer posts, so I shouldn’t hesitate to write longer posts. What’s important to notice is that this decision is driven completely by the objective evidence, not by the prior. Let’s imagine that the prior had been pretty far from the truth: 10% for longer posts, 45% for shorter, 45% for equal. The posterior odds would be: 10:45:45 * 72:**ε**:1 = 720:**ε**:45. The evidence would take Alternative 1 from 10% to 720/765 = 94%. I would have had to give the true alternative a tiny prior probability (<1%) for the evidence to fail to promote it. 30%:10%:60% was a pretty unopinionated prior; I made all the probabilities non-extreme to let the evidence dictate the posterior.
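The robustness claim is easy to check: multiply the deliberately skewed prior through the same likelihood odds (a Python sketch, with **ε** treated as zero):

```python
# A prior far from the truth: 10% long, 45% short, 45% equal
wrong_prior = [10, 45, 45]
likelihood_odds = [72, 0, 1]

posterior_odds = [p * l for p, l in zip(wrong_prior, likelihood_odds)]  # [720, 0, 45]
p_long = posterior_odds[0] / sum(posterior_odds)

print(round(p_long, 2))  # 720/765 ~ 0.94
```

Even starting from a 10% prior, the evidence drags Alternative 1 up to about 94%.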

The prior doesn’t have to be perfect, it just needs to give a non-tiny probability to the right answer. When using Bayesian inference, **sufficient evidence will promote the correct answer unless your prior is extremely wrong.**

Bottom line: the evidence tells me to write long posts, which the survey defined as having 3,000 words. I’m only 1,800 words in, so let’s take a second to talk about art.

A lie is profanity. A lie is the worst thing in the world. Art is the ability to tell the truth.

– Richard Pryor (I told you we’d get back to him)

I chose this quote not just because of Pryor’s very Bayesian surname, but also because I wanted to write the first essay ever that quotes both Richard Pryor and Roger Peng, two men who share little in common except for their initials and a respect for truth. Peng created the JHU data science specialization on Coursera (well explained and not very challenging) and wrote *The Art of Data Science*. Here’s how he explains the title of the book:

Much like songwriting (and computer programming, for that matter), it’s important to realize that data analysis is an art. It is not something yet that we can teach to a computer. Data analysts have many *tools* at their disposal, from linear regression to classification trees and even deep learning, and these tools have all been carefully taught to computers. But ultimately, a data analyst must find a way to assemble all of the tools and apply them to data to answer a relevant question—a question of interest to people.

I made several choices in the previous section that could all be made differently: picking the three alternatives I did, or using a simplified binomial distribution for the likelihood. I made these choices with the goal of *trying to tell the truth*. That involves getting *close to the truth* (which requires using good enough methods) and being able to *tell it* in a comprehensible way (which requires simplifying and making assumptions).

The procedure I outlined isn’t close to the Bayesian ideal of extracting maximum information from all available evidence. It’s not even up to industry standards, because unlike many of my readers I am not a professional data scientist (yet, growth mindset). But the procedure is easy to emulate, and I’m pretty sure the answer it gave me is true – people want longer posts.

And yet, this very useful procedure is only taught in the statistics departments of universities. Cross the lawn and you’ll find that most other departments, from psychology to geology, from the business school to the med school, teach something completely different: *null hypothesis testing*. This involves calculating a p-value to reject or accept the mythical “null hypothesis”. It’s a simple method, I learned it well enough to get a perfect score on the statistics final, and the following year I helped indoctrinate new students into null hypothesis testing as a teaching assistant. But since then I spent a good while thinking about it, and I came to realize that **null hypothesis testing is hopelessly inadequate in getting close to the truth**.

Null hypothesis testing fails to divulge the truth for three main reasons:

- It asks *“is the null hypothesis accepted or rejected?”*, which is **the wrong question**.
- It calculates a *p-value*, which is **the wrong answer**.
- Both the null and the p-value aren’t even relevant to what **the truth actually looks like**.

These are some **bold** claims, so let’s inspect them through the example of a serious research study, and not just my piddling survey. Here’s a study about differences in intelligence between the first and second-born sibling. What does null hypothesis testing make of it?

Hypothesis testing only knows to ask **one question: Is the null rejected?** The answer to this is a resounding “yes”:

But wait, isn’t there a better question to ask: *How much smarter are older siblings?* The answer is 1.5 IQ points. That is smaller than the 4-6 point average difference between two IQ tests taken by *the same person* two years apart. It’s a mere fraction of the 30 point hit a person’s IQ takes the moment they start typing a YouTube comment. The answer to the better question is: *imperceptibly.*

Another question could be: *What are the chances that in a given family the older sibling has higher IQ?* The answer to that is 52%, practically a coin flip. Any relevant information will tell you more about relative IQ than birth order, for example the fact that my brother studies chemistry and writes poetry, while I got an MBA and write about Pokemon.

So how did the study get such an impressive p-value? Because their sample size was 3,156. With such a huge sample size, you can get “statistical significance” with or without *actual significance*. Whatever your goal was in studying birth order and intelligence, answering “is the null hypothesis rejected?” is usually useless in achieving that goal.
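To put a num on the gap between “statistically significant” and “perceptible”, here’s a rough Python sketch. The 1.5-point mean gap and the sample size come from the study; the spread of within-family IQ differences is my assumption (I use an SD of 21 points, roughly what two uncorrelated SD-15 scores would give), and it lands near the 52% figure above:

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

mean_gap = 1.5   # IQ points, from the study
sd_gap = 21.0    # assumed SD of the older-minus-younger IQ difference
n_pairs = 3156   # sibling pairs in the sample

# Chance that in any one family the older sibling scores higher:
p_older_higher = norm_cdf(mean_gap / sd_gap)
print(round(p_older_higher, 2))  # ~0.53, barely better than a coin flip

# Yet the standard error of the *mean* gap shrinks with sqrt(n),
# so even a tiny gap clears any significance bar:
z = mean_gap / (sd_gap / sqrt(n_pairs))
print(round(z, 1))  # ~4 standard errors from zero
```

The per-family probability barely moves off 50%, while the z-statistic sails past any conventional threshold purely on sample size.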

The answer you get, a p-value, is pretty meaningless as well. Does a p-value of 5% mean that you’re wrong in rejecting the null 5% of the time? Nope! You’re going to be wrong somewhere between 30% and 80% of the time, depending on the power of the study. Does it even mean that if the null is true you’ll see the same result 5% of the time? Nope again! That’s because the p-value is calculated not as the chance of seeing the actual effect you measured, but as the chance of *getting a result at least as big (technically, as far from the null) as the measured effect*.

Calculating anything for the group of *“effects at least as big as the measured effect”* is absurd for two reasons:

- This group **isn’t representative of the measured effect** because the measured effect is an edge case within that group. In the IQ example, the p-value is the chance, given that the null is true and also given a bunch of assumptions, that the older sibling is smarter by 1.5 points, or 2 points, or 27 points, or 1500 points. It’s a huge range of outcomes, most of which are a lot farther away from 1.5 than 1.5 is from 0 (the null). If we assumed 0 and got 1.5, why are we calculating the probability of getting 27? Moreover, the assumptions that underlie the p-value calculation (such as a normal distribution) are very hard to justify over that huge range of 1.5 to infinity. Many distributions resemble a normal bell curve close to the middle but look very different in the tails. The “range of IQ differences bigger than 1.5” (the right tail) looks little like the data we actually have, which is “an IQ difference of exactly 1.5”.
- This group for the most part **doesn’t even include the actual effect**, which is usually smaller than the measured effect. If you measured a positive effect, the measurement error was likely in that positive direction as well. When we subtract the error from the measured result to get the **true result**, we get a small number which is completely outside the range of the bigger numbers we measured the p-value on.

The lower the power of the experiment, the worse this problem gets, and power is often pretty low. With the graph below Andrew Gelman shows just how bad this gets when the statistical power = 0.06, which is what the “Ovulating married women vote Republican” study had:

This is why in the likelihood calculation I used the precise result of “exactly 131 people prefer long posts”, not *“131 or above”.* P-value calculations don’t work with point estimates, but “or above” ranges will break even likelihood calculations.

Let’s try this on the survey results. If we tried null-rejection testing, whether I chose “50% prefer long posts” or “74% prefer long posts” as my null hypothesis, the actual result of “131/203 prefer long posts” would *reject the null with p<0.002*. Again, rejecting the null doesn’t tell you much.

When I calculated the likelihood given by “131 prefer long”, I got a ratio of 72:1. If instead I had calculated the likelihood based on “131 *or more *prefer long”, the likelihood ratio in favor of Alternative 1 would have been 47,000:1. Here’s the fun part: if I had looked at “131 or below”, the likelihood ratio would have been 1:1730 *in favor of the opposite alternative*!
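The direction-of-the-range effect is easy to check numerically. Here’s a Python sketch (standard library only; the helper names are mine) comparing Alternative 1 against Alternative 3 three ways. My exact tail magnitudes may differ a little from the post’s, depending on where the tail is cut, but the sign flip is the point:

```python
from math import comb

def binom_pmf(k, n, p):
    """P(X = k) for X ~ Binomial(n, p)."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def binom_range(lo, hi, n, p):
    """P(lo <= X <= hi) for X ~ Binomial(n, p)."""
    return sum(binom_pmf(k, n, p) for k in range(lo, hi + 1))

n, k = 203, 131  # 131 of 203 opinionated readers preferred long posts

# Exactly 131: the likelihood ratio used in the analysis above
point_ratio = binom_pmf(k, n, 0.74) / binom_pmf(k, n, 0.50)

# "131 or more": the ratio balloons in favor of Alternative 1
upper_ratio = binom_range(k, n, n, 0.74) / binom_range(k, n, n, 0.50)

# "131 or fewer": the same data now favors Alternative 3 instead
lower_ratio = binom_range(0, k, n, 0.50) / binom_range(0, k, n, 0.74)

print(round(point_ratio))  # ~72
print(round(upper_ratio))  # tens of thousands to one
print(round(lower_ratio))  # hundreds to one, in the opposite direction
```

Same data, three verdicts; only the point likelihood is free of the arbitrary choice of direction.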

Choosing a direction for the range of outcomes can arbitrarily support any hypothesis you come up with, or reject it. But that’s exactly what having a single null hypothesis does, it defines an arbitrary direction: if the measurement landed above it you look at that result *or above*, and vice versa. That to me is the main issue with null hypothesis testing – a single hypothesis with no alternatives can’t bring you to the actual answer.

When you have several alternatives, gathering more evidence always brings you closer to the truth. “25% more people like long posts” isn’t the *exact* truth, but it’s probably closer than the other two options. When you have a continuous distribution of alternatives, you do even better as the evidence zeroes the posterior in on the real answer.

But if you just had one null and you rejected it, your conclusions depend more on the choice of null (which can be quite arbitrary) than on what the evidence tells you. Your company released a new product and it sold 10,000 copies, is that good or bad? That entirely depends on whether your null hypothesis was 5,000 copies or 20,000. Should the null hypothesis be the average sales for any product your company makes? But this product had a smaller niche, a larger advertising budget, and twice the R&D cost. It makes no sense to compare it to the average.

The best you can do is to incorporate all available information about the customers, the price, the advertising and the cost. Then, use all those factors to calculate a predicted sales volume, along with a range of uncertainty around it. But that’s not a null hypothesis anymore, that’s the **prior**, the one that frequentists don’t like because it feels subjective.

Null-worship isn’t a strawman, it happens to Nobel-caliber scientists. If you remember my essay about hot hand shooting, Tom Gilovich and Amos Tversky hypothesized that basketball players shoot 0% better on their second free throw, and made that the null hypothesis because 0 is a nice round number that pretends to be objective. A bunch of random fans estimated the difference to be 8%. The actual answer came out to **4%**, and the scientists concluded that **there’s no improvement because their sample was too small for the null to be rejected at p<0.05**!

In absolute terms, both fans and scientists were off by 4%. In relative terms, the fans overestimated the effect by a factor of **2**, while the scientists underestimated the effect by a factor of **infinity**. Gilovich and Tversky didn’t declare victory because they were closer to the truth, but because they got to pick the null.

Is there any use to null hypothesis testing at all? Yes, but it’s the opposite of how most people use it. In common practice, if someone rejects the null, they declare victory and publish. If they don’t reject the null, they p-hack until the null is rejected and *then* declare victory and publish.

Instead, null hypothesis testing can be a quick check to see if your data is worthwhile. If you fail to reject the null, your data isn’t good enough. There’s too much noise and too little signal, you need better data or just a lot more of it. If you *do* reject the null, your data might be good enough to yield an answer once you throw out the p-value and start doing some actual statistical inference.

This post was nerdy, and it was long, and it’s time to wrap it up.

Data analysis is both an art *and* a science. There’s no single recipe for doing it right, but there are many ways to do it *wrong*, and a lot of the latter ways involve p-values. When the question falls to you, stick to a simple rulebook:

Rule 1 – Focus on the ultimate question you’re answering, the one that drives the decision. For every step in the analysis, ask yourself – is this step helping me answer the ultimate question?

Rule 2 – Before you see the data, come up with alternative hypotheses and predictions. Base these predictions on your knowledge of the issue, not on objective-looking round numbers.

Rule 3 –

*Featured image credit: Shadi Yousefian*


- Retrieved the higher quality camera from the lender, who gave the group instructions on how to work the camera.
- Researched new ways we could use different media; for example, Heather came up with the idea to record an editing session via QuickTime Player on the computers.
- Researched what to include on posters by looking at past Bastille posters, and then we reviewed possible poster ideas.


You want the thousand words? Well ok then.

*The year was 2017, and in a small city, a man sat in front of his computer. Snow fell, but it fell elsewhere, and this was not important to him right then. In this poorly described scene, our protagonist made a decision, changes tense mid sentence and is now wrangling with the edits required for his work in progress…*

Wait, what? You wanted the picture? Sorry, I misheard you.

Aaaanyway, so yeah. I expected I would need to make a fair few changes. The process would be a marathon rather than a sprint. But it turns out that an iceberg, with massive hidden depths of required work, is the more apt comparison. Now I know I need to do a huge amount. The original plan was to take the 24 chapters I had written and hammer them into a more fluent 24 chapters. I still need to do this. However, thanks to some brilliant feedback from my beta readers, I have decided I need some additional chapters as well.

Main character Jen spends a fair amount of the book at work. This is mainly because a Greek god has suggested her assistant manager is trying to trigger the apocalypse. However, there is definitely room for more variety, and I intend to follow beta reader advice and give more of a view of Jen’s home life. I’m currently looking at how I can link this to the main plot and subplots.

My plan now looks a little something like the picture above. I’ve made some fairly detailed notes on each chapter in addition to this. I’ve jotted down places where the writing is awkward, clumsy or lazy. I’ve also collected beta reader criticisms of each chapter, and made general notes on how I might address the issues. I’ve corrected a few obvious typos, but I did not spend huge amounts of time on this, as I plan to do so much rewriting.

Crucially, I’ve decided where I want the new chapters to fall. This is not an immutable decision, as I still have to plan the new sections. Once I start making notes on the additional scenes, I’ll get a better feel for whether they’ll fit where I want them to. Up to this point, I’ve mainly been analysing my manuscript. I’ve found what needs improvement. Now, it’s time to move on to the more difficult part.

Fixing the flaws.

The most difficult part is going to be adding in the new chapters, so I intend to start there. I’ll be structuring them using the amazing scene planner found on LiveWriteThrive (run by author C.S. Lakin, whose site and resources really are excellent). I want to start actually writing these chapters when February arrives, so I have just under two weeks to get this done.

I’ll let the new material sit once it’s done, and get into editing the stuff from the first draft. I’ll be keeping an eye out for what needs to change to accommodate the additional chapters. As well as specific issues to correct with each chapter, I have goals for the manuscript as a whole. These include adding depth and background to secondary characters, more description of locations, and working out how to describe clothing without going into too much detail.

I expect that iceberg of editing work is going to keep growing as I continue revising *Generic Fantasy*, but hopefully I can stay upbeat about the project and find myself with a much-improved second draft as a reward.

Thanks for reading, and I hope your own projects are going to plan.

Featured image courtesy of Pixabay.

*“No mistakes, no learning.”*

I can’t forget the day I met Candy for the first time. She was a bit reserved. I tried to make her smile; but she seemed too shy to smile back. She was so reluctant to express her emotions that I needed to remind her to be comfortable with me. “The more you enjoy the lesson, the more you learn,” I said. She was very scared to make mistakes, too, so I kept reminding her: “No mistakes, no learning.”

Over several meetings, I persisted in making her feel comfortable. I conducted my lessons with humour until I saw her very first smile. It was beautiful! And I definitely let her know that.

The lessons that followed became more exciting and fun for both of us. Now I hear her chuckle at the first sound of a funny icebreaker. She has grown so fond of me. I make sure that she absorbs the lesson, and I always praise her awesome performance. She never leaves without thanking me. Candy has touched my life in so many beautiful ways. Her confidence pushed me to deliver better lessons every day.

We start each lesson by talking about her day in school. I always commend her every achievement. She is also a promising painter; I can tell how artistic she is just by looking at the paintings and drawings she sends me.

I’ve always known she has the potential to be a fine English speaker: she has a good command of vocabulary, her diction is great (though her intonation needs improvement), and she comprehends the lessons well.

One day, her mom broke the good news: Candy is the best in her English class! She credits it to my effort in teaching her daughter, but I commend the love and support they gave Candy, which inspired her to strive harder in her studies, plus the fact that Candy is truly gifted with intelligence and talent.

I thank the training I got from 51Talk, the trainers who walked me through it all, but most of all, I am thankful to the ONE up there who directs my every single teaching day. All these have made me what I am now as a teacher. And I know that I will continue to teach in 51Talk because this is where I belong.

**Marcelinny has been a five-star Senior Teacher B at 51Talk since October 2015 and has conducted 2,919 lessons, or a total of 72,975 minutes, at 51Talk.**


I would like to comment on the steam-made body wave and straight hair: both have been selling well, and clients have come back for repeat installs.

The prices of natural hair are simply high, but the quality is superior too. I have celebrity clients who want no compromise with their looks. They are willing to pay if they are getting something on par with their high standards.

Totally satisfied with my samples and the first wholesale order.

Christopher Leigh

Thank you

Zubaida

Sofia

Janet

Amanda, UK

Crystal

Cherry

Agatha, Nigeria

Josephine

California, USA

To tell you folks about the hair he makes: you can without doubt order natural, unprocessed raw curly hair. Just amazing quality. The hair is soft. The curls are natural. I have personally used them. There is no relaxing of the curls. Even after I created a two-tone sombre, the curls maintained their pattern.

Here I must also point out that the cost of natural curls is excruciatingly high. But it is worth every penny and pound. Shedding is almost zero. There are no tangles. Curls, as we know, are difficult to manage, but the raw curls I have been receiving from him are just impeccable. No tangles. None, girls.

I would also like to mention the body wave and straight hair in the steam virgin hair he sells. Compared to natural curls, the steam virgin hair costs only a fraction of the price. True, the steam lasts just about 6 to 8 months, but the price is good.

Written By

Elizabeth Bates

United Kingdom

Dubai studio
