“Is it?  It really doesn’t feel like that’s the case, right now,” I answered.  “Home’s supposed to be a place I feel safe and secure.”


“I’m not giving up!”  I raised my voice, angry, surprised at myself for being angry.  I took a breath, forced myself to return to a normal volume, “I’m saying there’s probably no fucking way I’ll understand why she did what she did.  So why waste my time and energy dwelling on it?  Fuck her, she doesn’t deserve the amount of attention I’ve been paying her.  I’m… reprioritizing.”

“She’s a bully,” I said.  “At the end of the day, she only wants to fight opponents she knows she can beat.”

“I’ve fought two Endbringers,” Shadow Stalker said, stabbing a finger in my direction.  “I know what you’re trying to do.  Fucking manipulating me, getting me into a dangerous situation where you’ll get me killed.  Fuck you.”

Gaussian Mixture Models (GMMs)

Gaussian Mixture Models

  • A probabilistic model.
  • Assumes all data points are generated from a mixture of a finite number of Gaussian
    distributions.
  • The parameters of those Gaussian distributions are unknown and have to be estimated.
  • It is a way of generalizing k-means (or k-medoids or k-modes, for that matter)
    clustering to use the covariance structure/stats of the latent Gaussians as well as
    their means/central-tendency measures. A minimal fitting sketch follows below.
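A minimal fitting sketch, assuming scikit-learn and NumPy are available; the toy data, `n_components=3`, and the full covariance type are illustrative choices rather than anything fixed by these notes:

```python
# Sketch: fit a GMM and inspect the recovered means and covariances.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Toy data drawn from three latent Gaussians with different means/spreads.
X = np.vstack([
    rng.normal(loc=(0.0, 0.0), scale=1.0, size=(100, 2)),
    rng.normal(loc=(5.0, 0.0), scale=0.5, size=(100, 2)),
    rng.normal(loc=(0.0, 5.0), scale=1.5, size=(100, 2)),
])

gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
labels = gmm.fit_predict(X)       # hard cluster assignments, as in k-means
print(gmm.means_)                 # estimated means of the latent Gaussians
print(gmm.covariances_.shape)     # (3, 2, 2): one full covariance per component
```

Unlike k-means, the per-component covariance matrices let elongated or differently-sized clusters be modelled directly.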

scikit-learn

Pros:

  • Fastest algorithm for learning mixture models.
  • Maximizing only the likelihood will not bias the means towards zero or bias the
    cluster sizes to have specific structures.

Cons:

  • When there are not enough points per mixture component, estimating the covariance
    matrices becomes difficult.
  • The algorithm will always use all the components it has access to, so held-out or
    test data may be needed to decide how many components to use.

  • The number of components can be chosen based on the BIC criterion (see the sketch
    after this list).

  • Variational Bayesian Gaussian mixtures avoid having to specify the number of
    components up front.
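A sketch of the BIC-based choice mentioned above, again with illustrative toy data and an arbitrary candidate range of 1–6 components:

```python
# Sketch: choose the number of components by minimizing BIC.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.5, size=(100, 2)) for m in (0.0, 3.0, 6.0)])

bic_by_k = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
            for k in range(1, 7)}
best_k = min(bic_by_k, key=bic_by_k.get)   # smallest BIC wins
print(best_k)                              # expected to land on 3 for this toy data
```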

Variational Bayesian Gaussian Mixture
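A sketch of the variational variant via scikit-learn's `BayesianGaussianMixture`; the deliberate over-specification of components and the small `weight_concentration_prior` are illustrative choices:

```python
# Sketch: over-specify the component count and let the Dirichlet prior
# shrink the weights of unused components towards zero.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.5, size=(100, 2)) for m in (0.0, 3.0, 6.0)])

bgm = BayesianGaussianMixture(
    n_components=10,                  # upper bound, not a hard choice
    weight_concentration_prior=0.01,  # small prior favours fewer active components
    random_state=0,
).fit(X)
print(bgm.weights_.round(3))          # roughly three non-negligible weights expected
```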

Fitting a Gaussian model to data
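As a minimal sketch of this step, a single 1-D Gaussian can be fit by maximum likelihood with `scipy.stats.norm.fit`; the generated sample and its true parameters are illustrative:

```python
# Sketch: maximum-likelihood fit of one 1-D Gaussian.
import numpy as np
from scipy.stats import norm

x = np.random.default_rng(0).normal(loc=2.0, scale=3.0, size=1000)
mu, sigma = norm.fit(x)   # MLE: sample mean and (biased) standard deviation
print(mu, sigma)          # should come out close to 2.0 and 3.0
```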


A similar problem on a smaller scale.  I can walk through minutes, I could have walked back to save them, but I let them die because it meant a monster would remain gone.  What merit is a gamble, a sacrifice, if you stake things that matter nothing to you?


“…have been doing this for ten years.  I admire you for retaining your…” he trailed off.

“Idealism?”

“Not a word I’m familiar with, Weaver.  Faith?”

“Faith works.”

“I have none left, after ten years.  No faith.  We are a wretched, petty species, and we have been given power to destroy ourselves with

Factor Analysis — notes

Factor Analysis:

Multiple classifications:

Aka [Dimensionality reduction](https://en.wikipedia.org/wiki/Dimensionality_reduction)
Aka [Dimensionality Estimation](http://disco.ethz.ch/lectures/fs11/seminar/paper/samuel-1.pdf)
### Methods:
    * [Intrinsic Dimension Estimation](https://www.stat.berkeley.edu/~bickel/mldim.pdf),
      or [this alternative](http://www.sciencedirect.com/science/article/pii/S0020025515006179)
    * [PCA](https://en.wikipedia.org/wiki/Principal_component_analysis) (see the sketch after this list)
    * [Kernel-PCA](http://papers.nips.cc/paper/1491-kernel-pca-and-de-noising-in-feature-spaces.pdf)
    * [Graph-based kernel PCA](http://ieeexplore.ieee.org/abstract/document/1261097/)
    * [Linear Discriminant Analysis](http://www.music.mcgill.ca/~ich/classes/mumt611_07/classifiers/lda_theory.pdf)
    * [Generalized Discriminant Analysis](http://www.jmlr.org/papers/v6/ye05a.html)
    * [Manifold Learning](http://scikit-learn.org/stable/modules/manifold.html)
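A minimal PCA sketch using scikit-learn; the 64-dimensional digits data and `n_components=2` are illustrative choices:

```python
# Sketch: PCA as a dimensionality-reduction step.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

X, _ = load_digits(return_X_y=True)    # 1797 samples, 64 features
pca = PCA(n_components=2).fit(X)
X2 = pca.transform(X)                  # project onto the top two components
print(X2.shape)                        # (1797, 2)
print(pca.explained_variance_ratio_)   # share of variance each component keeps
```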
### Factor Analysis (based on goal):

    * [Exploratory Factor
      Analysis](https://en.wikipedia.org/wiki/Exploratory_factor_analysis):
      uncovers the underlying factor structure of the data without a prior hypothesis.
        * Fitting procedures are used to estimate the factor loadings and unique
          variances.

    * [Confirmatory Factor
      Analysis](https://en.wikipedia.org/wiki/Confirmatory_factor_analysis):
      tests whether the data fit a hypothesized factor structure.

### Types of factoring:
    * [Principal Component
      Analysis](https://en.wikipedia.org/wiki/Principal_component_analysis):
      extracts components that account for the maximum possible variance.

    * Canonical Factor Analysis: aka Rao's canonical factoring; uses the principal axis
      method, is unaffected by arbitrary rescaling of the data, and seeks the highest
      canonical correlation between factors and variables.

    * Common Factor Analysis: aka principal factor analysis; seeks the smallest number of
      factors accounting for the common variance of a set of variables.

    * Image Factoring: based on the correlation matrix of predicted variables, where each
      variable is predicted from the others via [multiple
      regression](https://en.wikipedia.org/wiki/Multiple_regression).

    * Alpha Factoring: based on maximizing the reliability of factors; assumes the
      variables are randomly sampled from a universe of variables (other methods assume
      fixed variables).

    * Factor Regression Model: a combination of the factor model and the regression
      model; aka a hybrid factor model whose factors are partially known.

### Terminology:
    * Factor Loadings: the correlation coefficients between the variables and the
      factors; a squared loading is the share of the variable's variance explained by
      that factor.
    * Interpreting Factor loadings: a common rule of thumb treats loadings of about 0.7
      or higher as strong, since the factor then explains roughly half of the variable's
      variance; looser thresholds are often used in practice.
    * Communality: the sum of a variable's squared factor loadings; the proportion of
      its variance explained by all the factors together.
    * Spurious Solutions: a communality above 1.0 signals a spurious solution, e.g. too
      small a sample or the wrong number of factors.
    * Uniqueness of Variable: the variance of a variable not shared with the factors,
      i.e. 1 minus its communality.
    * EigenValues/Characteristic Roots: measure how much of the total variance of the
      observed variables a factor explains.
    * Extraction Sums of squared loadings: the per-factor explained variance after
      extraction.
    * Factor Scores: the estimated score of each case (row) on each factor.

### Criteria for number of Factors:
    * Horn's Parallel Analysis: keep factors whose eigenvalues exceed the corresponding
      eigenvalues obtained from random data of the same size.
    * Velicer's MAP test: keep extracting factors while the average squared partial
      correlation among the variables keeps decreasing.
    Older methods:
    * Kaiser Criterion: keep factors whose eigenvalues are greater than 1 (see the
      sketch after this list).
    * Scree plot: plot the eigenvalues in descending order and keep the factors before
      the "elbow" where the curve flattens out.
    * Variance explained criteria: keep as many factors as needed to reach a target
      share of the total variance explained.
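A NumPy-only sketch of the Kaiser criterion; the synthetic data with one injected shared factor are illustrative, and the output also hints at why the criterion is considered dated:

```python
# Sketch: Kaiser criterion -- keep factors whose correlation-matrix
# eigenvalues exceed 1 (one standardized variable's worth of variance).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))
X[:, :4] += 1.5 * rng.normal(size=(500, 1))   # inject one shared factor

corr = np.corrcoef(X, rowvar=False)           # 8x8 correlation matrix
eigvals = np.linalg.eigvalsh(corr)[::-1]      # eigenvalues, descending
print(eigvals.round(2))
print("factors to keep:", int((eigvals > 1).sum()))
# Sampling noise can push pure-noise eigenvalues just above 1, which is why
# parallel analysis (comparing against random-data eigenvalues) is preferred.
```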

### Rotation Methods:
    * Varimax Rotation: orthogonal; maximizes the variance of the squared loadings
      within each factor, so each factor ends up with a few large loadings (see the
      sketch after this list).
    * Quartimax Rotation: orthogonal; minimizes the number of factors needed to explain
      each variable.
    * Equimax Rotation: orthogonal; a compromise between varimax and quartimax.
    * Direct oblimin Rotation: oblique; allows the factors to be correlated.
    * Promax Rotation: oblique; computationally faster than direct oblimin.
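A sketch of exploratory factoring with a varimax rotation via scikit-learn's `FactorAnalysis` (the `rotation` argument assumes scikit-learn >= 0.24); the iris data and `n_components=2` are illustrative:

```python
# Sketch: factor analysis with rotated loadings.
from sklearn.datasets import load_iris
from sklearn.decomposition import FactorAnalysis

X, _ = load_iris(return_X_y=True)
fa = FactorAnalysis(n_components=2, rotation="varimax", random_state=0).fit(X)
print(fa.components_.round(2))      # rotated loadings: rows = factors, cols = variables
print(fa.noise_variance_.round(2))  # per-variable unique variances
```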

Why you need to improve your training data, and how to do it

Pete Warden's blog

Photo by Lisha Li

Andrej Karpathy showed this slide as part of his talk at Train AI and I loved it! It captures the difference between deep learning research and production perfectly. Academic papers are almost entirely focused on new and improved models, with datasets usually chosen from a small set of public archives. Everyone I know who uses deep learning as part of an actual application spends most of their time worrying about the training data instead.

There are lots of good reasons why researchers are so fixated on model architectures, but it does mean that there are very few resources available to guide people who are focused on deploying machine learning in production. To address that, my talk at the conference was on “the unreasonable effectiveness of training data”, and I want to expand on that a bit in this blog post, explaining why data is so important…


The Bubble Under the Mathematical Rug

Math with Bad Drawings

Don’t freak out, but we’re surrounded by normal distributions.

They’re in our heights; our weights; our sampling means; our fever-dreams; our Galton Boards…


Every normal is a variation on the same bell-curved theme. Just specify two parameters—the mean, i.e., the center of the distribution, and the variance, which measures its breadth—and you’ve got a normal distribution. They’re one big clan, with a strong family resemblance.

But—for me, at least—this raises a question: Who is the matriarch of the family? Which normal distribution is the founding member, the Mitochondrial Eve, the universal common ancestor?
