Just at this time I left Caen, where I was then living, to go on a geological excursion under the auspices of the school of mines. The changes of travel made me forget my mathematical work. Having reached Coutances, we entered an omnibus to go some place or other. At the moment when I put my foot on the step the idea came to me, without anything in my former thoughts seeming to have paved the way for it, that the transformations I had used to define the Fuchsian functions were identical with those of non-Euclidean geometry. I did not verify the idea; I should not have had time, as, upon taking my seat in the omnibus, I went on with a conversation already commenced, but I felt a perfect certainty. On my return to Caen, for conscience’ sake I verified the result at my leisure.
-Henri Poincaré, Science and…
(NOTE: This is perhaps the dumbest post I’ve ever done. I couldn’t be prouder.)
Math 40: Trying to Visualize a Fourth Dimension. Syllabus includes Flatland, the Wikipedia page for “hypercube,” long hours of squinting, and self-inflicted head injuries.
Math 99: An Irritating Introduction to Proof. The term begins with five weeks of the professor responding to every question with, “But how do you knoooooooow?” If anyone is still enrolled at that point, we’ll have to wing it, since no one has ever lasted that long.
Math 101: Binary. An introductory study of the binary numeral system. Also listed as Math 5.
David Richeson: Division by Zero
While surfing the web the other day I read an article in which the author refers to a “topological map.” I think it is safe to say that he meant to write “topographic map.” This is an error I’ve seen many times before.
A topographic map is a map of a region that shows changes in elevation, usually with contour lines indicating different fixed elevations. This is a map that you would take on a hike.
A topological map is a continuous function between two topological spaces—not the same thing as a topographic map at all!
I thought for sure that there was no cartographic meaning for topological map. It turns out, however, that there is.
A topological map is a map that is only concerned with relative locations of features on the map, not on exact locations. A famous example is the graph that we use to…
Here is an argument I used to make, but now disagree with:
Just to add another perspective, I find many “performance” problems in the real world can often be attributed to factors other than the raw speed of the CPython interpreter. Yes, I’d love it if the interpreter were faster, but in my experience a lot of other things dominate. At least they do provide low hanging fruit to attack first. […]

But there’s something else that’s very important to consider, which rarely comes up in these discussions, and that’s the developer’s productivity and programming experience. […] This is often undervalued, but shouldn’t be! Moore’s Law doesn’t apply to humans, and you can’t effectively or cost efficiently scale up by throwing more bodies at a project. Python is one of the best languages (and ecosystems!) that make the development experience fun, high quality, and very efficient.
(from Barry Warsaw)
I…
In the early 1970s, when the use of seat belts was made mandatory in the US to improve driver safety, something strange happened.
Instead of road accident deaths coming down they actually went up!
While the regulators were perplexed by this phenomenon, an economist by the name of Sam Peltzman came up with a controversial answer.
He argued that though drivers had lower risks due to the additional safety that a seat belt provides, many drivers actually compensated for the additional safety by driving more recklessly (driving faster, not paying as much attention, etc.) under the comfort of the added safety.
“The safer they make the cars, the more risks the driver is willing to take”
This meant that bystanders – pedestrians, bicyclists, etc. – would receive no safety benefit from the seat belts but would instead suffer as a result of the increased recklessness.
He termed this…
Time split for building a first model:
1. Descriptive analysis of the data – 50% of the time
2. Data treatment (missing value and outlier fixing) – 40%
3. Data modelling – 4%
4. Estimation of performance – 6%
Data Exploration steps:
Source reference: https://www.analyticsvidhya.com/blog/2016/01/guide-data-exploration/
Below are the steps involved to understand, clean, and prepare your data for building your predictive model:
1. Variable Identification
2. Univariate Analysis
3. Bi-variate Analysis
4. Missing Value Treatment
5. Outlier Treatment
6. Variable Transformation
7. Variable Creation
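The univariate-analysis step above can be sketched with a few summary statistics. This is a minimal standard-library sketch (in practice a tool like pandas' `describe()` does this in one call); the `ages` data is made up for illustration.

```python
import statistics

# Hypothetical continuous variable to profile.
ages = [23, 25, 29, 31, 35, 35, 40, 52, 61]

# Basic univariate summary: central tendency, spread, and range.
summary = {
    "count": len(ages),
    "mean": statistics.mean(ages),
    "median": statistics.median(ages),
    "stdev": statistics.stdev(ages),
    "min": min(ages),
    "max": max(ages),
}
print(summary)
```

Comparing the mean and median already hints at skew, which feeds into the outlier-treatment and variable-transformation steps later.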
Missing Value Treatment:
1. Deletion
2. Mean/Mode/Median Imputation
3. Prediction Model
4. KNN Imputation
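Method 2 (mean/mode/median imputation) can be sketched in a few lines. This is a minimal standard-library sketch with a made-up `incomes` column, using `None` to mark missing values; in practice pandas' `fillna()` or scikit-learn's `KNNImputer` would typically be used.

```python
import statistics

# Hypothetical column with missing entries marked as None.
incomes = [3200, None, 4100, 3900, None, 5000]

# Impute with the median of the observed values (more robust to
# outliers than the mean).
observed = [v for v in incomes if v is not None]
median = statistics.median(observed)
imputed = [median if v is None else v for v in incomes]
print(imputed)
```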
Outlier Treatment – common sources of outliers:
1. Data Entry Errors
2. Measurement Error
3. Experimental Error
4. Intentional Outlier
5. Data Processing Error
6. Sampling Error
7. Natural Outlier
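Before deciding which of the causes above produced an outlier, you first have to flag it. A common first pass is the 1.5×IQR rule, sketched here with the standard library on illustrative data.

```python
import statistics

# Toy sample; 102 looks suspicious next to the rest.
values = [10, 12, 12, 13, 12, 11, 14, 13, 15, 102]

# Quartiles and the interquartile range.
q1, q2, q3 = statistics.quantiles(values, n=4)
iqr = q3 - q1

# Anything outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] is flagged.
low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = [v for v in values if v < low or v > high]
print(outliers)  # → [102]
```

Whether a flagged point is then deleted, capped, or kept depends on which of the seven causes above applies (a natural outlier should usually stay).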
A couple of weeks ago, I mentioned I had some concerns about the ChestXray14 dataset. I said I would come back when I had more info, and since then I have been digging into the data. I’ve talked with Dr Summers via email a few times as well. Unfortunately, this exploration has only increased my concerns about the dataset.
N-grams (basically a Markov chain model of order n−1):
* An n-gram model is an (n−1)-order Markov model.
* Used in protein sequencing, DNA sequencing, and computational linguistics (character- and word-level).
* Models sequences using the statistical properties of n-grams.
* Predicts the next item based on the previous n−1 items.
* In language modeling, independence assumptions are made so that each word depends only on the n−1 previous words (or characters, in the case of character-level modeling).
* The probability of a word conditional on the previous n−1 words follows a categorical distribution.
* In practice, the probability distributions are smoothed by assigning non-zero probabilities to unseen words or n-grams:
  * Use pseudocounts for unseen n-grams (generally motivated by Bayesian reasoning on the sub-n-grams, for orders below the original n).
* Skip-grams also allow the possibility of skipping: a 1-skip bigram (2-gram) would create bigrams while skipping the second word of a three-word sequence.
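The smoothing point above can be sketched as a bigram (n = 2) model with add-one pseudocounts; the toy corpus is made up for illustration.

```python
from collections import Counter

corpus = "the cat sat on the mat the cat ate".split()

# Count bigrams and unigrams over the corpus.
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)
vocab = set(corpus)

def prob(word, prev, k=1):
    """P(word | prev) with add-k smoothing over the vocabulary,
    so unseen bigrams still get non-zero probability."""
    return (bigrams[(prev, word)] + k) / (unigrams[prev] + k * len(vocab))

print(prob("cat", "the"))  # seen bigram: relatively high probability
print(prob("mat", "cat"))  # unseen bigram: small but non-zero
```

Without the pseudocount `k`, the second probability would be exactly zero, which is the problem smoothing exists to avoid.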
Two ways of training: (a) the CBOW (Continuous Bag-Of-Words) model predicts a target word given a group of context words; (b) skip-gram is the reverse, i.e. it predicts the group of context words from a given word.
Trained using maximum likelihood estimation.
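The skip-gram side of this can be sketched by generating (center, context) training pairs from a sentence; the window size and toy sentence are illustrative assumptions, and real word2vec training then fits these pairs by maximum likelihood (typically with tricks like negative sampling).

```python
def skipgram_pairs(tokens, window=1):
    """Generate (center, context) pairs within a symmetric window."""
    pairs = []
    for i, center in enumerate(tokens):
        # Context positions within `window` of the center, excluding itself.
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

print(skipgram_pairs(["the", "cat", "sat"]))
# → [('the', 'cat'), ('cat', 'the'), ('cat', 'sat'), ('sat', 'cat')]
```

Swapping the two elements of each pair gives exactly the CBOW direction: context in, center word out.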