Statistics — Tests of independence

Tests of independence:

The basic principle is the same as the ${\chi}^2$ goodness-of-fit test.
* Between categorical variables

${\chi}^2$ tests:

The standard approach is to compute expected counts and use the normalized sum of
squared differences between the observed and expected counts,
$\chi^2 = \sum (O - E)^2 / E$.
* Between numerical variables

${\chi}^2$ test:

  • Between a categorical and a numerical variable?

Null Hypothesis:

  • The two variables are independent.
  • Always a right-tail test.
  • The test statistic/measure has a ${\chi}^2$ distribution, if the assumptions are met:
    • The data are obtained from a random sample.
    • The expected frequency of each category is at least 5.

Properties of the test:

  • The data are the observed frequencies.
  • The data are arranged in a contingency table.
  • The degrees of freedom are the degrees of freedom for the row variable times the degrees of freedom for the column variable, i.e., $(r - 1)(c - 1)$; it is not one less than the sample size.
  • It is always a right tail test.
  • It has a chi-square distribution.
  • The expected value for each cell is computed by taking the row total times the column total and dividing by the grand total: $E = (\text{row total} \times \text{column total}) / \text{grand total}$ (see the sketch after this list).
  • The value of the test statistic doesn’t change if the order of the rows or columns is switched.
  • The value of the test statistic doesn’t change if the rows and columns are interchanged (transpose of the table).
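
A minimal sketch of these mechanics, assuming SciPy is available; the 2×3 contingency table below is made up purely for illustration:

```python
# Chi-square test of independence on a made-up 2x3 contingency table.
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([
    [30, 20, 50],   # group 1 counts across three categories
    [10, 40, 50],   # group 2 counts across three categories
])

chi2, p_value, dof, expected = chi2_contingency(observed)

# expected[i][j] == row_total[i] * col_total[j] / grand_total
# dof == (rows - 1) * (cols - 1) = 1 * 2 = 2
# Right-tail test: a large chi2 gives a small p_value, rejecting independence.
print(chi2, p_value, dof)
print(expected)
```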

The mystery of short term past performance versus future equity fund returns

The Eighty Twenty Investor

In our earlier posts, here and here, we found, to our dismay, that our natural inclination to choose the top mutual fund performers of the past 1 & 3 years hasn’t worked too well.

That leaves us with the obvious question..

What actually goes wrong when we pick the top funds of the past few years?

The rotating sector winners…

Below is a representation of the best performing sectors year over year. What do you notice?

[Figure: sector-wise calendar year performance]

The sector performance over each and every year varies significantly and the top and bottom sectors keep changing dramatically almost every year.

Sample this:

  • 2007 – Metals was the top performer with a whopping 121% annual return
  • 2008 – Metals was the bottom performer with a negative 74% return & FMCG was the top performer (-21%)
  • 2009 – The tables turned! FMCG was the bottom performer (47%) while Metals was the…


Share: Harry Potter and the Methods of Rationality

“Is there some amazing rational thing you do when your mind’s running in all different directions?” she managed.
“My own approach is usually to identify the different desires, give them names, conceive of them as separate individuals, and let them argue it out inside my head. So far the main persistent ones are my Hufflepuff, Ravenclaw, Gryffindor, and Slytherin sides, my Inner Critic, and my simulated copies of you, Neville, Draco, Professor McGonagall, Professor Flitwick, Professor Quirrell, Dad, Mum, Richard Feynman, and Douglas Hofstadter.”
Hermione considered trying this before her Common Sense warned that it might be a dangerous sort of thing to pretend. “There’s a copy of me inside your head?”
“Of course there is!” Harry said. The boy suddenly looked a bit more vulnerable. “You mean there isn’t a copy of me living in your head?”
There was, she realized; and not only that, it talked in Harry’s exact voice.
“It’s rather unnerving now that I think about it,” said Hermione. “I do have a copy of you living in my head. It’s talking to me right now using your voice, arguing how this is perfectly normal.”
“Good,” Harry said seriously. “I mean, I don’t see how people could be friends without that.”
She continued reading her book, then, Harry seeming content to watch the pages over her shoulder.
She’d gotten all the way to number seventy, Katherine Scott, who’d apparently invented a way to turn small animals into lemon tarts, when she finally worked up the courage to speak.

Sleeper Theorems

This inspired me to compile a list:
Since I’m not a mathematician (pure or applied), I just compiled things from the blog post, combining
them with the comments:
* Bayes’ Theorem: $P(A|B) = P(B|A)\,P(A)/P(B)$ (a tiny numeric sketch follows this list)

  • Jensen’s Inequality: $\psi(E(X)) \le E(\psi(X))$ if $\psi$ is a convex function and $X$ is
    a random variable. Extends convexity from sums to integrals (aka discrete to continuous).
  • Itô’s lemma: aka the basis of the Merton and Black-Scholes option pricing formula.
  • Complex analysis… should I disqualify this as not a theorem?
  • Standard error of the mean (details link).
  • Jordan Curve Theorem: A closed curve has an inside and an outside. (sounds obvious in 2D
    and 3D, perhaps with time as 4D, keeping options open is staying outside closed curves??)
  • Kullback-Leibler positivity (no clue; need to look up Wolfram Alpha or Wikipedia)
  • Hahn-Banach Theorem (again needs searching)
  • Pigeonhole principle, link here
  • Taylor’s theorem (once again, a continuous function approximated by a sum of discrete
    components/expressions). Used in:
  • Approximating any function with nth degree precision
  • Bounding the error term of an approximation
  • Decomposing functions into linear combinations of other functions
  • Kolmogorov’s Inequality for the maximum absolute value of the partial sums of a sequence of IID random variables (the basis of martingale theory).
  • Karush-Kuhn-Tucker optimality conditions for nonlinear programming, link
    here
  • Envelope Theorem — from economics
  • Zorn’s lemma, also the Axiom of Choice
  • Fourier Transform and Fast Fourier Transform
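
As promised above for the Bayes’ Theorem entry, a tiny worked sketch in Python; the base rate and test accuracies are made-up numbers:

```python
# Bayes' theorem P(A|B) = P(B|A) * P(A) / P(B) with made-up numbers:
# A = "has condition", B = "test is positive".
p_a = 0.01              # prior P(A): base rate of the condition
p_b_given_a = 0.95      # P(B|A): test sensitivity
p_b_given_not_a = 0.05  # P(B|~A): false-positive rate

# Total probability: P(B) = P(B|A)P(A) + P(B|~A)P(~A)
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

p_a_given_b = p_b_given_a * p_a / p_b
print(p_a_given_b)  # ~0.16: a positive test is far from conclusive at a 1% base rate
```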

Elbow Method

What is the elbow method?
So elbowing is this mechanism of
social reinforcement/communication about something that is generally considered bad to say
aloud or is too subtle to try to find words for.

Okay, just kidding; while that’s kinda true, I was just pranking y’all. What I want to
talk about is a stats/math/Machine Learning method used when trying to find clusters in a
given dataset. So the [Elbow Method](https://en.wikipedia.org/wiki/Elbow_method_(clustering))
is basically a measure/method for interpretation and validation of consistency of a cluster analysis.
Ugh… the original sentence in Wikipedia is so long with all its 10-letter words, I couldn’t
even type it again. (The above attempt was simplified during typing-on-the-fly.)

The basic issue is that, during a cluster analysis, we need to settle on a few things:
* A measure of distance within, across and between clusters and the points in the
clusters

  • A method/algorithm for updating and re-assigning the points to clusters.
  • Optional: a formula for guessing the number of clusters. In most cases this is
    optional, and parameterized.

In the case of the elbow method, it is a visual method for the third option. Basically, it’s the
ratio of the between-cluster variance to the overall variance, so it tells you how much (or
what %) of
the total variance is explained by choosing “n” clusters.

The name elbow method comes from visually plotting the number of clusters vs. that ratio (% of
variance explained) and finding the point where there’s an acute bend (if the number of clusters
is on the X-axis), then picking the number of clusters at that point.
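
A minimal sketch of that plot, assuming scikit-learn and matplotlib are available; the synthetic data and the range of k are made up for illustration (here the within-cluster sum of squares is plotted, which is the common equivalent; the elbow shows up where it stops dropping sharply):

```python
# Elbow plot: within-cluster sum of squares (inertia) versus the number of
# clusters k, on made-up synthetic data with 4 "true" clusters.
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=500, centers=4, random_state=0)

ks = range(1, 10)
inertias = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_ for k in ks]

plt.plot(ks, inertias, marker="o")
plt.xlabel("number of clusters k")
plt.ylabel("within-cluster sum of squares")
plt.title("Pick k at the elbow, where the curve bends sharply")
plt.show()
```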

Share: Harry Potter and the Methods of Rationality

The world around us redounds with opportunities, explodes with opportunities, which nearly all folk ignore because it would require them to violate a habit of thought; in every battle a thousand Hufflepuff bones waiting to be sharpened into spears. If you had thought to try a massed Finite Incantatem on general principles, you would have dispelled Mr. Potter’s suit of chainmail and everything else he was wearing except his underwear, which leads me to suspect that Mr. Potter did not quite realize his own vulnerability. Or you could have had your soldiers swarm Mr. Potter and Mr. Longbottom and physically wrest the wands from their hands. Mr. Malfoy’s own response was not what I would term well-reasoned, but at least he did not wholly ignore his thousand alternatives.” A sardonic smile. “But you, Miss Granger, had the misfortune to remember how to cast the Stunning Hex, and so you did not search your excellent memory for a dozen easier spells that might have proved efficacious. And you pinned all your army’s hopes on your own person, so they lost spirit when you fell. Afterward they continued to cast their futile Sleep Hexes, governed by the habits of fighting that had been trained into them, unable to break the pattern as Mr. Malfoy did. I cannot quite comprehend what goes through people’s minds when they repeat the same failed strategy over and over, but apparently it is an astonishingly rare realization that you can try something else. And so the Sunshine Regiment was wiped out by two soldiers.” The Defense Professor grinned mirthlessly. “One perceives certain similarities to how fifty Death Eaters dominated all of magical Britain, and how our much-loved Ministry continues in its rul…

F-test

We’ve already seen what the F-score is. Now let’s see what the
F-test is. Side note: I came across it when I was writing
Elbow Method, and my thought was: cool, another F-word for my readers, so

Here you go:

  • An F-test is any stats test that uses the F-distribution

  • It is often used when comparing stats models that have been fitted to a data set… Ahh, that
    sounds no different from the F-score then… Maybe just different
    fields (Statistics and Machine Learning) have different naming conventions?? Anyway, two different
    F-words… So let’s just say F-score/test?? Why two names for the same thing; let’s move on…

Examples:

  • Null hypothesis: the means of a given set of normally distributed populations, all having the same standard deviation, are equal (used in ANOVA).

  • The hypothesis that a proposed regression model fits the data well.

  • The hypothesis that a data set in a regression analysis follows the simpler of two proposed linear models that are nested within each other.

  • It (the non-regression type) is also used as a test of homoskedasticity (equality of variances)

Drawbacks:

Formula

  • Formula: explained variance / unexplained variance, or between-group variability / within-group variability. OK, that doesn’t sound like the F-score.

  • Formula (for nested regression models): $F = \dfrac{(RSS_1 - RSS_2)/(p_2 - p_1)}{RSS_2/(n - p_2)}$, where model 2 has $p_2 > p_1$ parameters and $n$ is the number of observations (a small sketch follows below).
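
A minimal sketch of that nested-model F-test, assuming NumPy and SciPy are available; the data and the two models (a line vs. a quadratic) are made up for illustration:

```python
# Nested-model F-test: does adding an x^2 term significantly improve the fit?
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100
x = rng.uniform(-3, 3, n)
y = 1.0 + 2.0 * x + 0.5 * x**2 + rng.normal(0, 1, n)  # "true" model is quadratic

def rss(X, y):
    """Residual sum of squares of an ordinary least-squares fit of y on the columns of X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

X1 = np.column_stack([np.ones(n), x])        # model 1: intercept + x   (p1 = 2)
X2 = np.column_stack([np.ones(n), x, x**2])  # model 2: adds x^2        (p2 = 3)
p1, p2 = X1.shape[1], X2.shape[1]

rss1, rss2 = rss(X1, y), rss(X2, y)
F = ((rss1 - rss2) / (p2 - p1)) / (rss2 / (n - p2))
p_value = stats.f.sf(F, p2 - p1, n - p2)     # right tail of the F-distribution
print(F, p_value)                            # a small p_value favors the larger model
```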

Visualization Grammar — VEGA

VEGA:

A visualization grammar, a language for:
* — creating, saving and sharing interactive visualization designs
* — describing the visual appearance and interactive behaviour of a visualization in JSON
* — reactive signals that dynamically modify a visualization in response to input
event streams

Key Semantics:

The key semantics are (a minimal example spec is sketched after this list):
  • — width, height, padding, autosize (all are for specifying the size of the
    visualization)
  • — data (an array of data definitions; can define the type, name, stream, url, and values of
    the data)

  • — scales (configurations for how to map columns of data to pixel positions or
    colors, or the type of representation (for example: categorical ==> bands)).

  • — axes (Configuration of axes)

  • — marks (Graphical primitives, which are used to encode data. Have properties such as
    position, size, shape, color. Examples are: dot, circle, rectangle (bar chart),
    star, etc.)

    • — Have a sub-property encode, which maps data to the graphical primitives
    • — encode’s sub-properties enter and exit configure the interactive parts when
      the mark is added or removed
    • — encode’s sub-properties hover and update configure the overall interactive parts
    • — Each of the hover and update properties can be triggered/linked to signals
      and changed accordingly
    • — A special type of mark called group is present and can contain other
      marks (for composition of graphical primitives to create complex ones)

  • — signals (act as dynamic variables, or as event listeners, to use JS parlance)

    • — Have a sub-property for event streams
    • — Can set dynamically evaluated variables as values on events, as
      defined
    • — Events can be mouseover, mouseout, click, drag, etc.
    • — Event streams
      • — Have sub-properties source, type, marktype, between, consume, filter, etc.
      • — Each sub-property decides which mark to change/update, based on which
        event type/user action/data change
      • — Event streams also have CSS-style selectors

  • — Legends

  • — Can create legends for the visualizations
  • — Customize them with sub-properties type, orient, fill, opacity, shape

  • — Transforms

  • — As the name implies, they transform data streams
  • — Have sub-properties like filter, stack, aggregate, bin, collect, fold,
    impute, etc.
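
As mentioned before the list, here is a minimal sketch of such a spec: a Vega bar chart assembled as a Python dict and dumped to JSON. The field names ("category", "amount") and the sizes are made-up illustration values:

```python
# A minimal Vega bar-chart spec assembled in Python and serialized to JSON.
import json

spec = {
    "$schema": "https://vega.github.io/schema/vega/v5.json",
    "width": 300,
    "height": 200,
    "padding": 5,
    "data": [
        {
            "name": "table",
            "values": [
                {"category": "A", "amount": 28},
                {"category": "B", "amount": 55},
                {"category": "C", "amount": 43},
            ],
        }
    ],
    "scales": [
        # categorical ==> band scale, as noted under "scales" above
        {"name": "xscale", "type": "band", "range": "width", "padding": 0.05,
         "domain": {"data": "table", "field": "category"}},
        {"name": "yscale", "type": "linear", "range": "height", "nice": True,
         "domain": {"data": "table", "field": "amount"}},
    ],
    "axes": [
        {"orient": "bottom", "scale": "xscale"},
        {"orient": "left", "scale": "yscale"},
    ],
    "marks": [
        {
            "type": "rect",  # rectangle marks ==> bar chart
            "from": {"data": "table"},
            "encode": {
                "enter": {
                    "x": {"scale": "xscale", "field": "category"},
                    "width": {"scale": "xscale", "band": 1},
                    "y": {"scale": "yscale", "field": "amount"},
                    "y2": {"scale": "yscale", "value": 0},
                },
                "update": {"fill": {"value": "steelblue"}},
                "hover": {"fill": {"value": "firebrick"}},
            },
        }
    ],
}

print(json.dumps(spec, indent=2))
```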