Dirichlet Distribution

Dirichlet Distribution (DD):
* — The DD can be considered a distribution over distributions
* — Each sample from a DD is a categorical distribution over K categories.
* — Samples drawn from the same DD are similar discrete distributions
* — It is parameterized by G0, a base distribution over K categories, and \alpha, a scale
factor (concentration parameter)
<code>
import numpy as np
from scipy.stats import dirichlet
np.set_printoptions(precision=2)

def stats(scale_factor, G0=[.2, .2, .6], N=10000):
    # Draw N samples from a Dirichlet with concentration vector alpha = scale_factor * G0
    samples = dirichlet(alpha=scale_factor * np.array(G0)).rvs(N)
    print("alpha:", scale_factor)
    print("element-wise mean:", samples.mean(axis=0))
    print("element-wise standard deviation:", samples.std(axis=0))
    print()

for scale in [0.1, 1, 10, 100, 1000]:
    stats(scale)
</code>

Dirichlet Process (DP)

  • — A way to generalize the Dirichlet Distribution
  • — Generates samples that are distributions similar to the base distribution H_0
  • — Also has a parameter \alpha that determines how much the samples
    vary from H_0

  • — A sample H from DP(\alpha, H_0) is constructed by drawing a countably
    infinite number of samples \theta_k from H_0 and then setting
    H = \sum_{k=1}^{\infty} \pi_k \delta(x - \theta_k)

where \pi_k — are carefully chosen weights that sum to 1
\delta — is the Dirac delta function
* — Since the samples from a DP are similar to the base distribution H_0, one way to
test whether a DP is generating your dataset is to check whether the distributions (of
different attributes/dimensions) you get are similar to each other. Something like a
permutation test, but understand the assumptions and caveats.

  • — The code for (approximately) sampling from a Dirichlet Process, via stick-breaking, can be written as:
<code>
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import beta, norm

def dirichlet_sample_approximation(base_measure, alpha, tol=0.01):
    betas = []
    pis = []
    betas.append(beta(1, alpha).rvs())
    pis.append(betas[0])
    # Keep breaking the stick until the pieces account for (1 - tol) of its length
    while sum(pis) < (1. - tol):
        s = np.sum([np.log(1 - b) for b in betas])
        new_beta = beta(1, alpha).rvs()
        betas.append(new_beta)
        pis.append(new_beta * np.exp(s))
    pis = np.array(pis)
    thetas = np.array([base_measure() for _ in pis])
    return pis, thetas

def plot_normal_dp_approximation(alpha):
    plt.figure()
    plt.title("Dirichlet Process Sample with N(0,1) Base Measure")
    plt.suptitle("alpha: %s" % alpha)
    pis, thetas = dirichlet_sample_approximation(lambda: norm().rvs(), alpha)
    pis = pis * (norm.pdf(0) / pis.max())  # rescale weights so they fit under the density curve
    plt.vlines(thetas, 0, pis)
    X = np.linspace(-4, 4, 100)
    plt.plot(X, norm.pdf(X))

plot_normal_dp_approximation(.1)
plot_normal_dp_approximation(1)
plot_normal_dp_approximation(10)
plot_normal_dp_approximation(1000)
</code>

  • — The code for a Dirichlet Process sampler (lazy stick-breaking) can be written as:
<code>
from numpy.random import choice
from scipy.stats import beta

class DirichletProcessSample():
    def __init__(self, base_measure, alpha):
        self.base_measure = base_measure
        self.alpha = alpha

        self.cache = []
        self.weights = []
        self.total_stick_used = 0.

    def __call__(self):
        remaining = 1.0 - self.total_stick_used
        i = DirichletProcessSample.roll_die(self.weights + [remaining])
        if i is not None and i < len(self.weights):
            # Landed on an existing stick piece: return the cached value
            return self.cache[i]
        else:
            # Landed on the unbroken remainder: break off a new piece and draw a new value
            stick_piece = beta(1, self.alpha).rvs() * remaining
            self.total_stick_used += stick_piece
            self.weights.append(stick_piece)
            new_value = self.base_measure()
            self.cache.append(new_value)
            return new_value

    @staticmethod
    def roll_die(weights):
        if weights:
            return choice(range(len(weights)), p=weights)
        else:
            return None
</code>
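
A quick usage sketch for the class above (my own addition, not from the original post):
<code>
from scipy.stats import norm

# Illustration only: draw 20 values from a single DP sample with an N(0, 1) base measure.
# Because stick pieces get reused, many of the printed values are exact repeats.
norm_dp = DirichletProcessSample(base_measure=lambda: norm().rvs(), alpha=10)
print([round(norm_dp(), 2) for _ in range(20)])
</code>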

Statistics — Tests of independence

Tests of independence:

The basic principle is the same as the ${\chi}^2$ goodness-of-fit test.
* Between categorical variables

${\chi}^2$ tests:

The standard approach is to compute expected counts, and find the
distribution of the sum of squared differences between expected counts and observed
counts (normalized).
* Between numerical variables

${\chi}^2$ test:

  • Between a categorical and a numerical variable?

Null Hypothesis:

  • The two variables are independent.
  • Always a right-tail test
  • The test statistic/measure has a ${\chi}^2$ distribution, if the assumptions are met:
  • Data are obtained from a random sample
  • The expected frequency of each category must be
    at least 5

Properties of the test:

  • The data are the observed frequencies.
  • The data are arranged in a contingency table.
  • The degrees of freedom are the degrees of freedom for the row variable times the degrees of freedom for the column variable. It is not one less than the sample size; it is the product of the two degrees of freedom.
  • It is always a right-tail test.
  • It has a chi-square distribution.
  • The expected count is computed by taking the row total times the column total and dividing by the grand total.
  • The value of the test statistic doesn’t change if the order of the rows or columns is switched.
  • The value of the test statistic doesn’t change if the rows and columns are interchanged (transpose of the matrix).
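
As an illustration of the above (my own sketch, using a made-up contingency table), scipy’s chi2_contingency computes the expected counts, the test statistic, the degrees of freedom, and the right-tail p-value:
<code>
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x3 table of observed frequencies (purely illustrative numbers)
observed = np.array([[30, 20, 50],
                     [40, 25, 35]])

chi2, p, dof, expected = chi2_contingency(observed)
print("chi2 statistic:", chi2)
print("degrees of freedom:", dof)      # (rows - 1) * (columns - 1) = 2
print("p-value (right tail):", p)
print("expected counts:\n", expected)  # row total * column total / grand total
</code>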

The mystery of short term past performance versus future equity fund returns

The Eighty Twenty Investor

In our earlier posts, here and here, we found, to our dismay, that our natural inclination to choose the top mutual fund performers of the past 1 & 3 years hasn’t worked too well.

That leaves us with the obvious question..

What actually goes wrong when we pick the top funds of the past few years?

 The rotating sector winners..

Below is a representation of the best performing sectors year over year. What do you notice?

[Figure: Sector-wise calendar year performance]

The sector performance over each and every year varies significantly and the top and bottom sectors keep changing dramatically almost every year.

Sample this:

  • 2007 – Metals was the top performer with a whopping 121% annual return
  • 2008 – Metals was the bottom performer with negative 74% returns & FMCG was the top performer (-21%)
  • 2009 – The tables turned! FMCG was the bottom performer (47%) while Metals was the…


Share: Harry Potter and the Methods of Rationality

“Is there some amazing rational thing you do when your mind’s running in all different directions?” she managed.
“My own approach is usually to identify the different desires, give them names, conceive of them as separate individuals, and let them argue it out inside my head. So far the main persistent ones are my Hufflepuff, Ravenclaw, Gryffindor, and Slytherin sides, my Inner Critic, and my simulated copies of you, Neville, Draco, Professor McGonagall, Professor Flitwick, Professor Quirrell, Dad, Mum, Richard Feynman, and Douglas Hofstadter.”
Hermione considered trying this before her Common Sense warned that it might be a dangerous sort of thing to pretend. “There’s a copy of me inside your head?”
“Of course there is!” Harry said. The boy suddenly looked a bit more vulnerable. “You mean there isn’t a copy of me living in your head?”
There was, she realized; and not only that, it talked in Harry’s exact voice.
“It’s rather unnerving now that I think about it,” said Hermione. “I do have a copy of you living in my head. It’s talking to me right now using your voice, arguing how this is perfectly normal.”
“Good,” Harry said seriously. “I mean, I don’t see how people could be friends without that.”
She continued reading her book, then, Harry seeming content to watch the pages over her shoulder.
She’d gotten all the way to number seventy, Katherine Scott, who’d apparently invented a way to turn small animals into lemon tarts, when she finally worked up the courage to speak.

Regularization

Based on a small post found here.

One of the standard problems in ML with meta-modelling algorithms (algorithms that run multiple statistical models over the given data and identify the best-fitting model, e.g. random forest, or the rarely practical genetic algorithm) is that they might favour overly complex models that overfit the given training data but perform poorly on live/test data.

The way these meta-modelling algorithms work is that they have an objective function (usually the RMS error of the stats/sub-model on the data) by which they pick the model, i.e. whichever model yields the lowest value of the objective function. So we can just add a complexity penalty (one obvious idea is the degree of the polynomial the model uses to fit, but how does that work when comparing against exponential functions?) and the objective function suddenly becomes RMS(Error) + Complexity_penalty(model).
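
A minimal sketch of this idea (my own illustration with hypothetical data and an arbitrary penalty weight, not the exact scheme from the linked post): fit polynomials of increasing degree and pick the degree minimizing RMS error plus a penalty proportional to the degree.
<code>
import numpy as np

# Hypothetical data: a noisy quadratic
rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 40)
y = 1.0 + 0.5 * x - 2.0 * x**2 + rng.normal(scale=3.0, size=x.size)

def penalized_objective(degree, lam=0.5):
    # RMS(Error) + Complexity_penalty(model), using the polynomial degree as a
    # (crude) complexity measure and lam as an arbitrary penalty weight
    coeffs = np.polyfit(x, y, degree)
    rmse = np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2))
    return rmse + lam * degree

best_degree = min(range(1, 10), key=penalized_objective)
print("chosen degree:", best_degree)  # tends to pick a low degree instead of an overfit one
</code>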

 

Now, depending on the right choice of error function and complexity penalty, this can find models that may perform worse than more complex models on the training data, but can perform better in the live scenario.

The idea of a complexity penalty itself is not new. I don’t dare say ML borrowed it from scientific experimentation methods or some such, but the idea that a more complex theory or model should be penalized relative to a simpler theory or model is very old. Here’s a better-written post on it.

Related Post: https://softwaremechanic.wordpress.com/2016/08/12/bayesians-vs-frequentistsaka-sampling-theorists/

 

Sleeper Theorems

This inspired me to compile a list:
Since I’m not a mathematician (pure/applied), I just compiled things from the blog post, combining them
with the comments:
* Bayes’ Theorem: P(A|B) = P(B|A) * P(A) / P(B)

  • Jensen’s Inequality: \psi(E(X)) <= E(\psi(X)) if \psi is a convex function and X is
    a random variable. Extends convexity from sums to integrals (aka discrete to continuous).
    (A quick numerical check is sketched after this list.)
  • Itô’s lemma: the basis of the Merton, Black and Scholes option pricing formula
  • Complex analysis.. should I disqualify this as not a theorem??
  • Standard error of the mean (details link)
  • Jordan Curve Theorem: A closed curve has an inside and an outside. (sounds obvious in 2D
    and 3D, perhaps with time as 4D, keeping options open is staying outside closed curves??)
  • Kullback-Leibler positivity: (no clue, need to look up Wolfram Alpha or Wikipedia)
  • Hahn-Banach Theorem (again, needs searching)
  • Pigeonhole principle, link here
  • Taylor’s theorem (once again, a continuous function approximated by a sum of discrete
    components/expressions). Used in:
  • Approximating any function with nth degree precision
  • Bounding the error term of an approximation
  • Decomposing functions into linear combinations of other functions
  • Kolmogorov’s Inequality for the maximum absolute value of the partial sums of a sequence of IID random variables (the basis of martingale theory)
  • Karush-Kuhn-Tucker optimality conditions for nonlinear programming, link
    here
  • Envelope Theorem — from economics
  • Zorn’s lemma, also Axiom of Choice
  • Fourier Transform and Fast Fourier Transform
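
A quick numerical check of Jensen’s inequality from the list above (my own illustration, taking the convex function \psi(x) = x^2 and a normal random variable):
<code>
import numpy as np

# Monte Carlo check of Jensen's inequality for the convex psi(x) = x**2:
# psi(E[X]) should not exceed E[psi(X)].
rng = np.random.default_rng(42)
X = rng.normal(loc=1.0, scale=2.0, size=100_000)
print("psi(E[X]) =", np.mean(X) ** 2)   # close to 1.0
print("E[psi(X)] =", np.mean(X ** 2))   # close to 5.0 (= variance + mean squared)
</code>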

Elbow Method

What is the elbow method?:
So elbowing is this mechanism of
social reinforcement/communication about something that is generally considered bad to say
aloud or is too subtle to try to find words for.

Okay, just kidding, while that’s kinda true, I was just pranking on y’all. What I want to
talk about is a stats/math/Machine Learning method used when trying to find clusters in a
given dataset. So the [Elbow Method](https://en.wikipedia.org/wiki/Elbow_method_(clustering))
is basically a measure/method for interpretation and validation of consistency of a cluster
analysis. Ugh.. the original sentence in Wikipedia is so long with all 10-letter words, I couldn’t
even type it again. (The above attempt was simplified during typing-on-the-fly.)

The basic issue is that, during a cluster analysis, we need to settle on a few things:
* A measure of distance within and between clusters and between points in the
clusters

  • A method/algorithm for updating and re-assigning the points to clusters.
  • Optional: a formula for guessing the number of clusters. In most cases this is
    optional, and parameterized.

The elbow method is a visual method for the third option. Basically, it’s based on a
ratio of variances: the explained (between-cluster) variance divided by the overall variance. So it tells you how much (or
what %) of
the total variance is explained by choosing “n” clusters.

The name elbow method comes from visually plotting the number of clusters vs. the ratio (% of
variance explained) and finding the point where there’s an acute bend (with the no. of clusters
on the X-axis), then picking the number of clusters at that point.
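
A minimal sketch (my own illustration on synthetic data): plot the within-cluster sum of squares from scikit-learn’s KMeans against the number of clusters; the % of variance explained bends at the same point, so the “elbow” shows up either way.
<code>
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic data with 4 "true" clusters, purely for illustration
X, _ = make_blobs(n_samples=500, centers=4, random_state=0)

ks = range(1, 11)
inertias = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
            for k in ks]

plt.plot(list(ks), inertias, marker="o")
plt.xlabel("number of clusters")
plt.ylabel("within-cluster sum of squares")
plt.show()  # the acute bend (elbow) appears around k = 4 here
</code>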