I shall dispense this advice now.
1. Treat your early employees more like partners than wage slaves.*
2. This follows from the previous one: after every hire (and fire), reconsider your selection process.
3. Remember Charlie Munger’s advice on trust here (quoted below)**.
4. The best problem solvers prefer to focus on solving the problem(s) and move right on to the next problem. They would much rather leave the performance reviews, raises (promised at the time of joining), etc. to others. So if you do promise a review and a raise based on it, follow through. Don’t stall with the “we’ll do this in a formal setting in two weeks” dodge and then fail to follow through. You won’t build the best possible team with that approach.
5. Find the product/market fit. (Meh. I’m not qualified to say much about this without having found one hands-on.)
6. Build a monopoly niche. Don’t compete on price; use your skills and knowledge to build a niche monopoly, which will be the biggest barrier to entry for any competitor.
By the way, the last two are just me regurgitating what I think makes sense from what I have read around; I am currently only experimenting with implementing them.
** – “The highest form that civilization can reach is a seamless web of deserved trust — not much procedure, just totally reliable people correctly trusting one another. … In your own life what you want is a seamless web of deserved trust. And if your proposed marriage contract has forty-seven pages, I suggest you not enter.”
Source: Wesco Financial annual meeting, 2008 (quoted in Stanford Business School paper)
* — Note how I didn’t say anything about politeness, or a good salary, or on-time salary, etc. That’s because all of those can be the wrong things to emphasize. My whole reason for this point is that they should have skin in the game. Everything else can be worked around; just don’t get this part wrong.
This is a long-standing debate/argument and, like most polarized arguments, both sides have some valid and good reasons for their stand. (There goes the punchline/TL;DR.) I’ll try to go a few levels deeper and explain the reasons why I think this is something of a fake argument. (Disclaimer: I am just a math enthusiast, and a willing-to-speculate novice. Do your own research; if this post serves as a starting point for that, I’ll have done my job.)
- As EY writes in this post, Bayes’ theorem is a law that’s more general and should be observed over whatever frequentist tools we have developed.
- If you read the original post carefully, he doesn’t mention the original/underlying distribution, guesses about it, or confidence intervals (see the calibration game).
- He points to a chapter (in the addendum) here.
- Most of the post is otherwise about using tools vs using a general theory, and how the general theory is more powerful and saves a lot of time.
- My first reaction to the post was: but obviously there’s a reason those two cases should be treated differently. They both have the same number of samples, but different ways of taking the samples. One sampling method (the one that samples until 60% success) is a biased way of gathering data.
- As a different blog and some comments point out, if we’re dealing with robots (deterministic, algorithmic data collectors) that take data in a rigorous, deterministic, algorithmic manner, the Bayesian answers are the same.
- However, in real life it’s going to be humans, who’ll have a lot more decisions to make about whether to consider a data point or not (for example, at what stage a patient should be considered a candidate for the experimental drug).
- The point I am interested in making, however, is related to the known-unknowns vs unknown-unknowns debate.
- My point being: even if you have a robot collecting the data, if the underlying nature of the distribution is an unknown-unknown (or, for that matter, depends on an unknown-unknown factor, say location, as some diseases are more widespread in some areas), two collectors can gather the same results even while seeing different local distributions.
- A related point is that determining the right sample size, so that you can be confident in the representativeness of the sample, is a harder problem in a lot of cases.
- To be fair, EY is not ignorant of the problem described above. He even refers to it a bit in his “0 and 1 are not probabilities” post here. So the original post might have over-simplified for the sake of rhetoric, or simply because he hadn’t read The Red Queen.
- The Red Queen details a bunch of evolutionary theories, eventually arguing that the constant race between parasites and host immune systems is why we have sex as a reproductive mechanism and why we have two genders/sexes.
- The medicine/biology example is a much more complex system than it seems, so this mistake is easy to make.
- Yes, in all of the cases above, the Bayesian method (which is simpler to use and understand) will work if the factors (priors) are known before doing the analysis.
- But my point is that we don’t know all the factors (priors), and may not even be able to list all of them, let alone screen them and find the prior probability of each.
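The robot-collector case from the bullets above can be sketched numerically. Assuming the classic setup (6 successes in 10 trials, which is my own illustrative choice, not from the original post), the likelihood under a fixed-number-of-trials design and under a stop-at-the-k-th-success design differ only by a constant that doesn’t involve the unknown success rate, so the same prior yields the same Bayesian posterior under both designs:

```python
from math import comb

# Hypothetical numbers for the classic example: 6 successes in 10 trials.
n, k = 10, 6

def binom_lik(theta):
    # Design A: the robot runs exactly n trials and happens to see k successes.
    return comb(n, k) * theta**k * (1 - theta)**(n - k)

def negbinom_lik(theta):
    # Design B: the robot keeps sampling until the k-th success,
    # which happens to arrive on trial n.
    return comb(n - 1, k - 1) * theta**k * (1 - theta)**(n - k)

# The two likelihoods differ only by a constant factor in theta,
# so with the same prior they give the same Bayesian posterior.
for theta in (0.3, 0.5, 0.7):
    print(round(binom_lik(theta) / negbinom_lik(theta), 4))  # same ratio every time
```

A frequentist p-value, by contrast, does depend on which stopping rule generated the data, which is exactly the asymmetry the debate is about.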
P.S: Here’s a funny Chuck Norris-style list of facts about Eliezer Yudkowsky (which I happened upon when trying to find the original post, and was not aware of before composing this post in my head). And here’s an xkcd comic about frequentists vs bayesians.
UPDATE-1 (5-6 hrs after original post conception): I realized my disclaimer doesn’t really give you a Bayesian prior for judging my post. So here’s my history with statistics: I’ve had trouble understanding the logic/reasoning/proofs behind standard (frequentist?) statistical tests, and was never a fan of rotely doing the steps. I’m still trying to understand the logic behind those tests, but today, if I were to bet, I’d rather bet on results from the Bayesian method than from any conventional method**.
UPDATE-2 (5-6 hrs after original post conception): A good example might be the counter-example, i.e.: given the same data (in this case just the frequency of a distribution, nothing else, i.e. no mean, variance, kurtosis or skewness), show that the Bayesian method gives different results based on how the data was collected and the frequentist method doesn’t. I’m not sure that’s possible, though, given the number of methods frequentist/standard statistics uses.
UPDATE-3 (a few weeks after original writing): Here’s another post about the difference in approaches between the two.
UPDATE-4 (a month or so after): I came across this post, which mentions more than two buckets, though obviously they are not all disjoint sets (buckets).
UPDATE-5 (a further couple of months after): There’s a slightly different approach to splitting the two cultures, from a different perspective, here.
** — I might tweak the amount I’d bet based on the results from it.
In fact, any time anybody offers you anything with a big commission and a 200-page prospectus, don’t buy it. Occasionally, you’ll be wrong if you adopt “Munger’s Rule”. However, over a lifetime, you’ll be a long way ahead—and you will miss a lot of unhappy experiences that might otherwise reduce your love for your fellow man.
That’s such an obvious concept—that there are all kinds of wonderful new inventions that give you nothing as owners except the opportunity to spend a lot more money in a business that’s still going to be lousy. The money still won’t come to you. All of the advantages from great improvements are going to flow through to the customers.
The great lesson in microeconomics is to discriminate between when technology is going to help you and when it’s going to kill you. And most people do not get this straight in their heads. But a fellow like Buffett does.
For example, when we were in the textile business, which is a terrible commodity business, we were making low-end textiles—which are a real commodity product. And one day, the people came to Warren and said, “They’ve invented a new loom that we think will do twice as much work as our old ones
Please read the Disclaimers at the end of the post first, if you’re easily offended.
- Get unbeatable at 20 Questions (rationality link). It’ll help you make your initial diagnoses (the ones based on questions about symptoms) faster and more accurately.
- Understand probability and Bayes’ theorem, and how to apply them.** This will help you interpret the results of the tests you ordered based on the 20 questions.
- Understand the base rate fallacy, and how to avoid being overconfident.
- Understand the upsides and downsides of the drugs you prescribe. Know the probabilities of fatal and adverse side effects, and update them with evidence (Bayes’ theorem, mentioned above) as you try out different brands and combinations.
- Know the costs and benefits of any treatment, and help the patient make a good decision based on the cost-benefit analysis of the treatment combined with the probabilities of the outcomes.
- Ask for and keep a history of the patient’s medical records and allergies, going back to their grandparents.*
- Be willing and able to judge when a patient is better off with a specialist. Try to keep in touch with doctors nearby, hopefully covering all types of specialties.
- Explain the treatment options and their pros and cons in plain language to patients. It’ll reduce misunderstandings, and eventually dissatisfaction, with the treatment.
- Resist the urge to treat patients as NPCs. Involve them in the treatment process.
- Find a hobby that you can keep improving at till the end of your life.
- Be aware of the conflict of interest between the patient and the pharmaceutical companies.
- Have enough research skills to form opinions on base rates/probabilities for different diseases and treatment methods as needed.
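The Bayes’-theorem and base-rate bullets above can be made concrete with a quick sketch. All the numbers below (prevalence, sensitivity, specificity) are hypothetical, chosen purely for illustration:

```python
def posterior_positive(prevalence, sensitivity, specificity):
    """P(disease | positive test) by Bayes' theorem."""
    true_pos = prevalence * sensitivity          # sick AND test positive
    false_pos = (1 - prevalence) * (1 - specificity)  # healthy AND test positive
    return true_pos / (true_pos + false_pos)

# Hypothetical numbers: a 1-in-1000 disease and a fairly good test.
# Despite 99% sensitivity, a positive result means under a 2% chance of
# actually having the disease, because the low base rate dominates.
print(round(posterior_positive(0.001, 0.99, 0.95), 3))  # 0.019
```

This is the base rate fallacy in one line: the intuitive answer (“99% accurate test, so probably sick”) ignores the prior, and the gap between 99% and 2% is exactly what the ordering doctor has to keep in mind.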
Basically the same skill set as above. One difference is in the skill level, which you should customize as needed.
- For example: you would need to be able to explain the treatment options and the probabilistic nature of the outcomes to your patients.
- As for research, keep track of progress in your area: treatment methods and their different outcomes on the “quality of life” of patients after treatment.
- Better applied Bayesian skills, in the sense of figuring out the independent variables, and their probabilities, that affect the outcome.
Some controversial ideas (better use your common sense before trying these):
- Experiment a little with your bio-chemistry and see how it affects your thought processes. To be safe, stick to biologically produced substances. For example: injecting yourself with a small adrenaline dose and monitoring your bodily response can help you keep your thinking clear in emergency situations.
- Know your own biology better. For example: male vs female differences mean the adrenaline response is different and peaks later in females. If you think that’s wrong, please go back and check your course work. Also watch this 2-hour video and come back with objections after reading the studies he quotes.
- Keep checking your blood regularly (at whatever frequency your practice and nature of work demand), e.g. hormone levels, so that you can start regulating yourself for optimal decision-making.
- If you’re a woman, you’ll customize your practice of some of the skill sets above differently. For example: mastery over emotions might need more practice, while empathizing/connecting with the patient might be easier.
- Most of the above is based on my experiences (either as a patient myself or as a concerned relative) with Indian Doctors. Some of it may be trivial to others, but most of it is skills a doc will need that are ignored in school.
- I’ve split it in two (specialists and generalists), but there’s a fair amount of overlap.
- These are fairly high standards, but worth shooting for, and I’ve kept the focus on smart work rather than hard work.
- I’ve stayed away from a few topics, like bedside manners/social skills and specific medical treatments and conditions (obviously, I’m not a Doctor after all), and a few others; you can add/delete (and also specify/pick levels) as you see fit.
- Pick the skill levels demanded by your client population and adjust.
- I’m assuming generalists don’t have to deal with emergency cases; in some areas that assumption won’t hold, so pick the common emergency areas there and follow the specialist advice.
- I wrote this based on my experiences and with humans in mind, but veterinary Doctors may find some of it useful too.
* — I understand this is difficult in Indian circumstances, but I’ve seen it done manually (simple leaves of prescriptions organized alphabetically; link to dr.rathinavel), so it’s possible and worth the effort, unless you practice in an area with a highly migratory population (rural vs urban areas, for example).
**– If you’re trying to compete on availability for consultation, you’ll need to be able to do this after being woken in the middle of the night.
- Child-birth, or labour, involves physical trauma, but it is trauma that is expected to happen (and anticipated for, say, 9 months)... does that complicate things? (It definitely breeds paranoia and a worry-downward-spiral.)
- However, the fact that it is expected, and the fact that it is linked to species survival, has facilitated a lot of research. I tend to think expected and predictable events get a bit over-researched and over-operationalized* in general. Does the fact that it is linked to reproduction cause too much (harmful) research, or not? Interesting question, but it may not be answerable...
- The choice of the word “labour” is interesting. It has Marxist/communist connotations in most contexts, but not really in this one... coincidence?
- The blanket ban of “no males in the labour ward”** might be useful and valid, but it is hardly without exception, and it definitely allows for more follow-the-process decisions, encouraging an ignorance of context.
- Even after about 30 hours of partial sleep and an adrenaline surge***, I could feel that jump in heart rate on looking at my daughter.
- P.S: every note above (except the 5th) was written before the adrenaline surge.
* — to the point of over-engineering that risks violating “do no evil”
*** — which is another story I’ll write after my limbic system settles down (aka down-regulating the limbic system for that memory); suffice it to say, I became more convinced of gender differences (aka sexist).
The three common measures of central tendency used in statistics are:
- 1. Mean
- 2. Median
- 3. Mode
Note that all three of these, as well as the other measures, obey the basic rules of measure theory.
The point is that which measure you choose to describe your central tendency is key, and should be decided based on what you want to do with it. More precisely: what exactly do you want to optimize your process/setup/workflow for? Based on that, you’ll have to choose the right measure. If you read the post above, you’ll understand that:
- 1. Mean — the mean is a good choice when you want to minimize the variance (aka the squared distance, or second statistical moment, about the central tendency measure). That’s to say, your optimization function is dominated by squared-distance-from-the-centre terms. Think of lowering the mean squared error, and how that’s used in straight-line fitting.
- 2. Median — the median is more useful if your optimization function has distance terms, but not squared ones. So this is, in effect, the choice when you want to minimize the total absolute distance from the central tendency.
- 3. Midrange — the midrange is useful when your function looks like max(distance from the central measure).
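Those three claims can be checked numerically. Here’s a minimal brute-force sketch (the sample data is arbitrary, made up for illustration): for each loss function, search a fine grid for the centre point that minimizes it, and compare against the mean, median, and midrange of the data.

```python
# Arbitrary sample data; the claims hold for any dataset.
data = [1.0, 2.0, 2.0, 3.0, 10.0]

def argmin(loss):
    # Brute-force search over a fine grid of candidate centre points.
    candidates = [i / 100 for i in range(0, 1101)]
    return min(candidates, key=loss)

def sq_loss(c):   # sum of squared distances: minimised by the mean
    return sum((x - c) ** 2 for x in data)

def abs_loss(c):  # sum of absolute distances: minimised by the median
    return sum(abs(x - c) for x in data)

def max_loss(c):  # worst-case distance: minimised by the midrange
    return max(abs(x - c) for x in data)

print(argmin(sq_loss))   # 3.6, the mean of the data
print(argmin(abs_loss))  # 2.0, the median
print(argmin(max_loss))  # 5.5, the midrange (min + max) / 2
```

Each measure of central tendency is literally the optimizer of its own loss function, which is why the choice of loss should drive the choice of measure.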
If most of that sounded too abstract, here’s a practical application I can think of right away. Imagine you’re doing performance testing and optimization of a small API you’ve built. I don’t want to go into what kind of API it is or the technology behind it; let’s just assume you want to run it multiple times, calculate a measure of central tendency of the runtimes, and then try to improve the code’s performance (with profiling + different libraries/data structures, whatever...). So which measure of central tendency should you pick?
- Mean — most engineers would pick the mean, and in a lot of cases it’s enough, but think about it: it corresponds to a squared-distance loss, so it penalizes variance in run/execution times. That’s important and useful to optimize in most cases, but in some cases it may not be what matters.
- Mode — the mode is the most frequent runtime, so it tells you about the typical case, not the extremes. An example of where the extremes matter instead: your system is a small component of, say, a high-frequency trading platform, and its consumer has a timeout and fails if your API exceeds it (aka your API is mission-critical; it simply cannot be late). Then you want to make sure that even in the slowest case your program completes, so the quantity to lower is the maximum runtime (one of the two values the midrange is built from), not the mode. (Note this is still a trade-off against the average/mean case, just like any hard choice.)
- Median — this is very similar to the mean, except it doesn’t really care about variance. If you pick the median, your optimized program is sure to have the best performance on the typical run/case/dataset.
- Midrange — well, this is an interesting case. Think about it: even in the previous timeout example, this could be useful. Suppose your API is not mission-critical (i.e. if it fails, the overall algorithm just throws out that data term and proceeds with the other data sources), and you want to maximize the number of times your program finishes within the timeout. That is, you’re purely measuring how often you finish/return a value within the timeout period; you don’t care about the worst-case scenario.