
Friday, October 24, 2014

Have you had your flu shot yet?

This is Joseph

I was reading Mike the Mad Biologist's web page and I noted this article on Ebola and the flu.  Ebola has been in the news a lot but influenza remains a bigger killer than Ebola:
Ebola has claimed fewer than 4,000 lives globally to date, none in the United States. Flu claims between 250,000 and 500,000 lives every year, including over 20,000 in the United States—far more American lives than Ebola will ever claim.
Notice just how terrible the Spanish flu was:
It infected 500 million people across the world, including remote Pacific islands and the Arctic, and killed 50 to 100 million of them—three to five percent of the world's population
Today, that would be a disease that killed between nine and fifteen million Americans.  And the easiest option to reduce risk is to vaccinate against the infection. 
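A rough back-of-the-envelope check, assuming a 2014 U.S. population of about 318 million (my figure, not the article's): 3 percent of 318 million is roughly 9.5 million and 5 percent is roughly 15.9 million, which is where the nine-to-fifteen-million range comes from.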

Have you had your flu vaccine?  [I get mine annually, usually on the first day it is permitted]

Wednesday, March 19, 2014

Differential growth in life expectancy?

There has been a lot of discussion about Annie Lowrey's article on changes in life expectancy, which documents how most of the recent rise in life expectancy has gone to Americans of higher socio-economic status.  I found the objection about causality less compelling:

It is hard to prove causality with the available information. County-level data is the most detailed available, but it is not perfect. People move, and that is a confounding factor. McDowell’s population has dropped by more than half since the late 1970s, whereas Fairfax’s has roughly doubled. Perhaps more educated and healthier people have been relocating from places like McDowell to places like Fairfax. In that case, life expectancy would not have changed; how Americans arrange themselves geographically would have.

“These things are not nearly as clear as they seem, or as clear as epidemiologists seem to think,” said Angus Deaton, an economist at Princeton.
It is possible that some geographic re-arrangement is going on.  But that still doesn't make charts like the second one in this Aaron Carroll blog post easier to explain.  If higher-earning Social Security recipients live longer than lower-earning recipients, then the association cannot be waved away with a simple appeal to the ecological fallacy. 

This is the sort of case where the data are limited but we still need to make decisions.  It is odd that we seem untroubled about getting things wrong when a decision advantages the affluent, yet become desperately worried about over-interpreting the data when redistribution would be the obvious policy response. 

Friday, October 18, 2013

Medical Innovation is expensive

Megan McArdle has some strong opinions on how high drug prices have managed to help drive medical innovation:
Drug development is essentially a giant international collective-action problem. The U.S. has kept it from being a total disaster because we don’t have good centralized control of our insurance market, and our political system is pretty disorganized and easy to lobby. If that changes -- and maybe we just changed it! -- we’ll knock down the prices of drugs to near the marginal cost using government fiat, and I expect that innovation in this sector will grind to a halt. Stuff will still be coming out of academic labs, but no one is going to take those promising targets and turn them into actual drugs.


and
There are some promising alternatives. The main two that have been suggested are prizes and having the U.S. government get into the business of developing actual drugs, rather than just funding basic research. I’m in favor of trying both of these approaches. But so far, prizes have not proved themselves as ways to fund what is essentially commercial product development -- at least, not at the same level that patents do. Nor has the government. As we’ve just seen from the government’s attempt to develop a Travelocity-like site for health insurance, there are reasons to think that government might not be very good at that sort of thing. I don’t mean to slur the government -- governments absolutely have developed drugs in the past. But these are not the majority, and government processes often make it hard to do things that companies do easily.
Now I don't want to knock pharmaceutical companies; a lot of good work is done by them.  On the other hand, medical costs in the United States are really, really high.  And the National Institutes of Health has proven itself to be an effective model of targeted and efficient medical research.  It is true that the cost of having the government take promising targets through to approved drugs would not be trivial: randomized controlled trials are extremely expensive, and they are crucial to ensuring that only safe medications reach the market. 

So this would be expensive.  But it is unclear to me whether it would be more expensive than the current model of drug development.  And these subsidies could go to many of the same companies that develop drugs now.  This conversation would be much better informed by some actual cost-benefit calculations.  That is a bit outside my expertise, but I see it as a key step to really advancing the discussion. 

Tuesday, September 3, 2013

Epidemiology example in tech

This post is a great example of epidemiological reasoning:

When people compare the stability of Linux and Windows, they may be biased a couple ways. First, Linux is more often deployed on servers and Windows more often on desktops. So they may unintentionally be comparing Linux servers to Windows desktops. Second, they may be thinking that Linux users’ computers are more stable than Windows users’ computers, which is probably true. Linux users typically know more about computers than Windows users. They are more able to avoid problems and more able to fix them when they occur.
You can see two central issues in epidemiology here: forming comparable comparison groups and disentangling environmental factors.  The higher knowledge base of Linux users is directly analogous to the healthy user effect (more informed individuals are more likely to use preventive therapies).  The server-versus-desktop difference is a great example of differences between the underlying populations: just as, if drinkers are also more likely to smoke, sleep less, and so forth, you may well see more problems that have nothing to do with the exposure itself. 
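Here is a toy simulation of that healthy-user-style bias, in Python.  The numbers (30 percent experienced users, the crash probabilities, and so on) are invented purely for illustration; the operating system has no causal effect on crashes in this model, yet the crude comparison makes Linux look far more stable.

import random

random.seed(1)

def simulate(n=100_000):
    # crude[os] = [crashes, machines]; strata adds the user-knowledge stratum
    crude = {"Linux": [0, 0], "Windows": [0, 0]}
    strata = {(os, savvy): [0, 0] for os in ("Linux", "Windows") for savvy in (True, False)}
    for _ in range(n):
        savvy = random.random() < 0.3                        # 30% experienced users
        os = "Linux" if random.random() < (0.6 if savvy else 0.05) else "Windows"
        crash = random.random() < (0.02 if savvy else 0.10)  # risk depends on the user only
        crude[os][0] += crash
        crude[os][1] += 1
        strata[(os, savvy)][0] += crash
        strata[(os, savvy)][1] += 1
    for os in ("Linux", "Windows"):
        print(os, "crude crash rate:", round(crude[os][0] / crude[os][1], 3))
    for key, (c, m) in sorted(strata.items()):
        print(key, "stratified crash rate:", round(c / m, 3))

simulate()

The crude rates differ sharply, but within each user stratum they are essentially identical -- the same pattern you see when a preventive therapy happens to be taken mostly by health-conscious people.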

Epidemiology is everywhere. 

Wednesday, August 14, 2013

Social epidemiology of who does not get vaccinated

This post on who is actually refusing vaccination for their children is interesting indeed.  Consider:
 As Seitz-Wald explains, the unvaccinated kids are clustered in some of the wealthiest schools and neighborhoods, particularly in California, where some extremely expensive private schools have vaccination compliance rates as low as 20 percent. Anti-vaccination sentiment has been stereotyped as a mindless lefty cause, but in reality, Republicans are slightly more likely to oppose vaccination than Democrats. The real correlation is between having a lot of money and class privilege and opposing vaccination.
This puts the whole issue of selfish behavior in a completely different context, especially since failing to vaccinate an older child can result in the infection of younger children.  So people with the most resources are deliberately deciding not to support the social good of reducing the burden of infectious disease among children? 

The reasons remain speculative, but the ugliest possibility is that non-vaccination is a status symbol, a way of showing that a special class of people should not have to follow the rules.  There is very little public health justification for exempting people because they want to feel special and exempt from the rules.  We did not create special-person exemptions when we prohibited dumping raw sewage in the streets or dropping garbage in city parks, and we should not do it here. 

Wednesday, July 31, 2013

General versus particular cases

Andrew Gelman wrote a very interesting article in Slate on how over-reliance on statistical significance can lead to spurious findings.  The authors of the study he was critiquing replied to his piece, and Andrew's thoughts on the response are here.

This led to two thoughts.  One, I am completely unimpressed by the claim that appearing in a peer-reviewed journal settles the matter -- peer review is a screen, but even good tests have false positives.  All it convinces me of is that the authors were thoughtful in developing the article, not that the article is immune to problems.  This is true of all papers, including mine. 

Two, I think this is a very tough area from which to pick a single example.  Any one paper could well have followed the highest possible level of rigor, as Jessica Tracy and Alec Beall claim theirs did.  That doesn't mean that all studies in the class followed these practices, or that there were no filters aiding or impeding publication that might increase the risk of a false positive.

For example, I have just finished publishing a paper in which I had an unexpected finding that I wanted to replicate (the association itself was hypothesized a priori, but the direction was the reverse of the a priori hypothesis).  I found a suitable second cohort, added authors, added analyses, rewrote the paper as a careful combination of the two cohorts, and redid the discussion.  Guess what: the finding did not replicate.  So then I had the special gift of publishing a null paper with a lot of authors and some potentially confusing associations.  If I had just given up at that point, the question might have been left hanging around until somebody else found the same thing (I often use widely available data in my research) and published it. 

So I would be cautious about multiplying p-values together to get a probability of a false positive.  Jessica Tracy and Alec Beall write:
The chance of obtaining the same significant effect across two independent consecutive studies is .0025 (Murayama, K., Pekrun, R., & Fiedler, K. (in press). Research practices that can prevent an inflation of false-positive rates. Personality and Social Psychology Review.)
I suspect that this only holds if the testable hypothesis was clearly stated before either study was done.  It also presumes independence (not always obvious, since the design of one study may influence the other) and the absence of a confounding factor (something causing both the exposure and the outcome).
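To see why the confounding caveat matters, here is a toy simulation in Python (my own invented setup, not anything from the papers in question): an unmeasured factor drives both the "exposure" and the outcome, the exposure itself does nothing, and yet two independent studies both come up "significant" far more often than the quoted 0.0025 would suggest.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def one_study(n=200):
    u = rng.normal(size=n)        # unmeasured confounder
    x = u + rng.normal(size=n)    # exposure, partly driven by u
    y = u + rng.normal(size=n)    # outcome, partly driven by u but not by x
    r, p = stats.pearsonr(x, y)
    return p < 0.05

pairs = 1000
replicated = sum(one_study() and one_study() for _ in range(pairs)) / pairs
print("Share of study pairs where both are 'significant':", replicated)

With a confounder this strong, essentially every pair of studies "replicates", so multiplying p-values tells you nothing about whether the exposure itself does anything.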

Furthermore, I think that as epidemiologists we need to decide whether these studies are making strong causal claims or advancing a prospective association that may lead to a better understanding of a disease state.  We often write articles in the latter mode but then lapse into the former when being quoted. 

So I guess I am writing a lot to say a couple of things in conclusion. 

One, it is very hard to pick a specific example of a general problem when any one example might happen to meet the standards required for the depth of inference being made, and whether it does is very hard to ascertain from the published record. 

Two, the decisions about what to study and what to publish are also pretty important steps in the process.  They can have a powerful influence on the direction of science in a way that is very hard to detect. 

So I want to thank Andrew Gelman for starting this conversation and the authors of the paper in question for acting as an example in this tough dialogue. 



Wednesday, March 13, 2013

Epidemiology and Truth

This post by Thomas Lumley of Stats Chat is well worth reading and thinking carefully about.  In particular, when talking about a study of processed meats and mortality he opines:

So, the claims in the results section are about observed differences in a particular data set, and presumably are true. The claim in the conclusion is that this ‘supports’ ‘an association’. If you interpret the conclusion as claiming there is definitive evidence of an effect of processed meat, you’re looking at the sort of claim that is claimed to be 90% wrong. Epidemiologists don’t interpret their literature this way, and since they are the audience they write for, their interpretation of what they mean should at least be considered seriously.


I think that "support for an association" has to be the most misunderstood phrase in epidemiology (and we epidemiologists are not innocent of this mistake ourselves).  The real issue is that cause is a very tricky animal.  Complex disease states can have a multitude of "causes".

Consider a very simple (and utterly artificial) example.  Let's assume (no real science went into this example) that hypertension (high systolic blood pressure) occurs when multiple exposures overwhelm a person's ability to compensate for the insult.  So if you have only one exposure from the list then you are totally fine; if you have two or more then you see elevated blood pressure.  Let's make the list simple: excessive salt intake, sedentary behavior, a high-stress work environment, cigarette smoking, and obesity.  Now some of these factors may be correlated, which is its own special problem.

But imagine how hard this would be to disentangle, using either epidemiological methods or personal experimentation.  Imagine two people who work in a high-stress job, one of whom eats a lot of salt.  They both start a fitness program because of borderline hypertension.  One person sees the disease state vanish whereas the other sees little to no change.  How do you know which factor was the important one?

It's easy to look for differences in the exercise programs; if you torture the data enough, it will confess.  At the population level, you would expect completely different results depending on how prevalent these factors are in the underlying population.  You'd expect, in the long run, to come to some sort of conclusion, but it is unlikely that you'd ever stumble across this underlying model using associational techniques. 
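A quick toy simulation in Python of exactly this two-or-more-exposures model (all prevalences invented for illustration) shows how the apparent effect of a single exposure, say salt, swings around depending on how common the other exposures are:

import random

random.seed(2)

def salt_risk_difference(other_prevalence, n=200_000):
    # hypertension occurs when a person has two or more of the five exposures
    counts = {True: [0, 0], False: [0, 0]}   # keyed by salt exposure: [cases, people]
    for _ in range(n):
        salt = random.random() < 0.5
        others = sum(random.random() < other_prevalence for _ in range(4))
        hypertensive = (salt + others) >= 2
        counts[salt][0] += hypertensive
        counts[salt][1] += 1
    return counts[True][0] / counts[True][1] - counts[False][0] / counts[False][1]

for p in (0.05, 0.2, 0.5):
    print("other exposures at prevalence", p,
          "-> risk difference for salt:", round(salt_risk_difference(p), 3))

The same causal structure produces a weak, a strong, and a middling association for salt in three different populations, which is roughly what the literature on any single factor ends up looking like.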

The argument continues:
So, how good is the evidence that 90% of epidemiology results interpreted this way are false? It depends. The argument is that most hypotheses about effects are wrong, and that the standard for associations used in epidemiology is not a terribly strong filter, so that most hypotheses that survive the filter are still wrong. That’s reasonable as far as it goes. It does depend on taking studies in isolation. In this example there are both previous epidemiological studies and biochemical evidence to suggest that fat, salt, smoke, and nitrates from meat curing might all be harmful. In other papers the background evidence can vary from strongly in favor to strongly against, and this needs to be taken into account.
 
This points out (correctly) the trouble involved in merely establishing an association between A and B.  And it sets aside the more troubling possibilities -- like A being a marker for something else and not a cause at all.  Even a randomized trial will only tell you that A reduces B as an average causal effect in the source population under study.  It will not tell you why A reduced B.  We can make educated guesses, but we can also be quite wrong.

Finally, there is the whole question of estimation.  If we call a result false whenever the estimated size of the average causal effect of intervention A on outcome B is biased at all, then I submit that 90% is a very conservative estimate (especially if "truth" means an interval around the point estimate at the precision of the reported estimate, given the oddly high number of decimal places people like to quote for fuzzy estimates). 

But that last point kind of falls into the "true but trivial" category . . .


Monday, March 11, 2013

Some epidemiology for a change

John Cook has an interesting point:
When you reject a data point as an outlier, you’re saying that the point is unlikely to occur again, despite the fact that you’ve already seen it. This puts you in the curious position of believing that some values you have not seen are more likely than one of the values you have in fact seen.
 
This is especially problematic for rare but important outcomes, and it can be very hard to decide what to do in these cases.  Imagine a randomized controlled trial of the effectiveness of a new medication for a rare disease (say, something for memory improvement in older adults).  One of the treated participants experiences sudden cardiac death whereas nobody in the placebo group does. 

On the one hand, if the sudden cardiac death had occurred in the placebo group, we would be extremely reluctant to advance this as evidence that the medication in question prevents death.  On the other hand, rare but serious adverse drug events both exist and can do a great deal of damage.  The true but trivial answer is "get more data points".  Obviously, if that is a feasible option it should be pursued. 
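To put a crude number on how little a single event tells you, here is a quick check in Python using Fisher's exact test on invented trial sizes of 500 per arm -- purely an illustration, not an endorsement of any particular test for this problem:

from scipy.stats import fisher_exact

# 1 sudden cardiac death among 500 treated, 0 among 500 on placebo (made-up numbers)
table = [[1, 499],    # treated:  deaths, survivors
         [0, 500]]    # placebo:  deaths, survivors
odds_ratio, p_value = fisher_exact(table)
print(p_value)        # close to 1: the data cannot distinguish signal from noise

The test is formally correct and almost entirely uninformative, which is exactly the bind described above.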

But these questions get really tricky when there is simply a dearth of data.  Under these circumstances, I do not think that any statistical approach (frequentist, Bayesian or other) is going to give consistently useful answers, as we don't know if the outlier is a mistake (a recording error, for example) or if it is the most important feature of the data.

It's not a fun problem. 

Tuesday, October 2, 2012

Health Insurance Question


Austin Frakt on John Goodman's proposals in Priceless
Anyway, the main rule John doesn’t like is community rating. He explains the problems with community rating, leading to a seeming take-down of risk adjustment. One problem with risk adjustment is that no methods predict costs all that well. Of course, some of health care, probably most of it, is unpredictable, the very part John thinks we should insure against.


John’s proposed solution to risk adjustment is that, upon switching plans, an individual’s “original health plan would pay the extra premium being charged by the new health plan, reflecting the deterioration in health condition.” There are two things about this I do not understand. First, how would this extra premium be calculated in a way that is different from risk adjustment payments? If we knew a better way, we’d have better risk adjustment now.*


Second, this idea seems no different than risk adjustment by another name. Think about it from the new plan’s point of view. Would the plan manager act any differently if the payment is called a “change of health status offset” and paid by the original insurer or a “risk adjustment payment” and paid via a market administrator of some sort (funded, for example, by assessments on low-risk bearing plans)? A dollar is a dollar. The same limitations of risk adjustment apply, don’t they?*
 
I see two issues here, both brought up in the comments.  The first is information: sorting out what the "lump sum payment" from the first plan to the second plan should be is a daunting task.

The other is the assumption that market players are immortal.  What happens if a company invests its reserves in high-risk assets?  Or if a company goes bankrupt?  How does the consumer get reimbursed for the increase in premium once the original company has no assets? 

This is unlike a regular insurance company: if a fire insurer has to stop covering thousands of customers, it does not incur instant liabilities.  Nor does the underlying risk of fire make it harder and harder to insure a house over time (or at least it doesn't change as briskly as health does between ages 20 and 50).

The closest analogy is pension funds, but notice the huge problems we are having with defined benefit pension plans.  Notice how much discussion there is about breaking pension plan contracts due to bankruptcy; airline pilots seem to be the latest example.

Now consider the amount of personal risk such a system would create.  At 18 you buy insurance and then hope that it lasts until you are 65 (if we keep Medicare) or perhaps 80 or 90 if we don't.  Even the 18-to-65 period is 47 years.  How many top companies of 47 years ago are healthy today? 

So what is the solution to this risk and information problem?  Well, with pensions we have government backing, and that helps.  But at what point does regulating the market and building a system of transfers between insurers reduce efficiency to the point where competition isn't going to deliver gains?  And recall, the real way to make money in this market is to forecast risk (over 47 years) better than your competitors.  But if you underestimate risk and mis-price your plans, you can't cut services, because customers will leave and bankrupt you instantly.

Isn't this just begging for an endless cycle of bailouts?

Monday, September 17, 2012

Another reason observational epidemiology is hard

John D Cook:

And yet behind every complex set of rules is a paper showing that it outperforms simple rules, under conditions of its author’s choosing. That is, the person proposing the complex model picks the scenarios for comparison. Unfortunately, the world throws at us scenarios not of our choosing. Simpler methods may perform better when model assumptions are violated. And model assumptions are always violated, at least to some extent.
 
One of the hardest things about simulation studies is that we get to develop our own set of assumptions, so we actually know how to model the phenomenon of interest correctly. 

The problem is that, in practice, we usually do not know how much error is introduced when the complex (and often non-linear) model fails.  On the other hand, it is amazing how far one can get with a clear set of rules of thumb. 

I wonder if it would be better to have the model tested by someone other than the person who proposed it. 

Tuesday, September 4, 2012

Why Observational Epidemiology is frustrating

Andrew Gelman has a post up on the history of cigarette smoking research, based on a book he was reading a while back.  It's pretty interesting but what really caught my eye was this comment:

Vague statistical inference can not possibly establish such a causal link. Even valid associative inference should establish a 50-100% correlation between smoking and cancer, but it does not even come close. Most people who smoke don’t get lung cancer, and at least 10% of Americans who do get lung cancer- do not smoke. There are also huge international/ethnic variations among smokers and cancer rates. There is currently no proof whatsoever for the alleged smoking-cancer causal link. None. Smoking is a disgusting and silly habit. But all that one can now objectively say is that it is a risk factor for cancer and increases the incidence of lung cancer.
 
And this was not the only commenter casting doubt on the association.  As an epidemiologist, I want to scream.  If people will not believe this evidence, then they really will not believe any level of evidence from observational epidemiology.  We have cohort studies going back 50 or more years (Richard Doll ran one).  Even better, the members of that cohort did not initially know that smoking was harmful (and I recall that the original hypothesis was automobile fumes rather than smoking, although my memory may be failing me here).  So we don't even suspect a healthy abstainer effect. 

The requirement for a 50-100% correlation seems to demand that smoking directly cause lung cancer rather than increase the underlying risk of lung cancer.  Consider skiing and broken legs.  Not all broken legs are due to skiing, and many people ski without breaking a leg.  But there is no question that skiing is a risk factor for broken legs.  Another good example is compression fractures in the spine.  If you are working with a veterans population, the first question you ask when you see a spinal compression fracture is "were you a paratrooper?"  Not all paratroopers have compression fractures, and not all compression fractures are due to jumping out of airplanes, but it is a pretty direct link to increased risk. 

There is a libertarian line of defense here: people ski because they value the enjoyment of skiing more than the risk of a broken limb.  I am not always delighted by it, but it is at least an arguable position.  But directly denying the link between smoking and lung cancer seems to be setting a very aggressive standard of proof. 

Wednesday, August 22, 2012

Government Health Care

Aaron Carroll:
There are plenty of things that government does poorly. Or, at least you can make that argument, and find some support for it out there. For instance, many people believe that government does a terrible job at sparking innovation. I could imagine a debate there. Some think that government does a bad job at providing choice. That’s entirely defensible. Government run systems also allow less room for profit, which can drive out entrepreneurs. Also arguable.

But what government systems do well is hold down costs. They use central planning. They use their large market power to negotiate for reduced reimbursement (see Part 2). They buy drugs cheaper. They eliminate profit and overhead.
 
In a lot of ways this still understates the role of government in health care.  The regulatory rules around health care are also responsible for increasing prices and have definite effects on innovation.  Now I happen to think that some of the rules are good (e.g., the FDA) and some are bonkers (e.g., limiting the number of new physicians via residency slots).  But there is a point where you have to decide how you want a market to be run.  Designing it so that it regulates things that help interest groups (e.g., keeping physician numbers down) but not other things (e.g., reducing costs by using market power) is very much the definition of regulatory capture.

I am quite willing to have a discussion about free-market health care.  The first barrier to a real free-market system is how we deal with public health.  After all, it took government intervention to get sewers and outhouses into common use in Europe; just look at the complexity of the modern sanitation system in France.  There is a conflict between the freedom to dump waste in the streets and the need to protect the public from fecal-borne disease.  Similar arguments can be made about the need to keep antibiotics effective. 

A system that keeps all of the regulatory barriers to entry but shifts costs to the consumer is a very partial form of opening an industry to the free market. 

Monday, August 20, 2012

Levitt and publishing bias in medical journals

Via Andrew Gelman here is a quote from Steven Levitt
When I told my father that I was sending my work saying car seats are not that effective to medical journals, he laughed and said they would never publish it because of the result, no matter how well done the analysis was. (As is so often the case, he was right, and I eventually published it in an economics journal.)
Now compare his article to this one (published a year later):
OBJECTIVE: The objective of this study was to provide an updated estimate of the effectiveness of belt-positioning booster (BPB) seats compared with seat belts alone in reducing the risk for injury for children aged 4 to 8 years. METHODS: Data were collected from a longitudinal study of children who were involved in crashes in 16 states and the District of Columbia from December 1, 1998, to November 30, 2007, with data collected via insurance claims records and a validated telephone survey. The study sample included children who were aged 4 to 8 years, seated in the rear rows of the vehicle, and restrained by either a seat belt or a BPB seat. Multivariable logistic regression was used to determine the odds of injury for those in BPB seats versus those in seat belts. Effects of crash direction and booster seat type were also explored. RESULTS: Complete interview data were obtained on 7151 children in 6591 crashes representing an estimated 120646 children in 116503 crashes in the study population. The adjusted relative risk for injury to children in BPB seats compared with those in seat belts was 0.55. CONCLUSIONS: This study reconfirms previous reports that BPB seats reduce the risk for injury in children aged 4 through 8 years. On the basis of these analyses, parents, pediatricians, and health educators should continue to recommend as best practice the use of BPB seats once a child outgrows a harness-based child restraint until he or she is at least 8 years of age.
So what is different?  Well, the complete interview data is a hint as to what could be happening.  It is very hard to publish a paper in a medical journal using weaker data than what is already available elsewhere.  Even more interestingly, earlier papers had already found protective associations (this was 2006), which should also be concerning. 

Then we notice that the Doyle and Levitt paper cites Elliott et al. as a reference, yet still claims to be the first to consider this issue:
This study provides the first analysis of the relative effectiveness of seat belts and child safety seats in preventing injury based on representative samples of police-reported crash data.
So now let us consider reasons that a medical journal may have had issues with this paper.  First, it does not seem to deal with the previous literature well.  Second, it doesn't explain why crash testing results do not seem to translate into actual reduction in events.  It might be due to misuse of the equipment, but it is not clear to me what the conclusion should be then. 

But jumping to the conclusion that the paper would not be published because of its conclusion assumes facts not in evidence.  It is common for people to jump fields and apply the tools they learned in their own discipline (economics) without necessarily thinking about the issues that obsess people in the new field (public health).  Sometimes this is a good thing, and a new perspective can be a breath of fresh air.  But in a mature field there may also be good reasons that the current researchers focus on the points they do.

This reminds me of Emily Oster, another economist who wandered into public health and seemed surprised at the resistance that she encountered.

So is the explanation Levitt's father gave possible?  Yes.  But far more likely was the difficulty of jumping into a field with a highly counter-intuitive claim and hoping for an immediate high-impact publication.  Medical journals are used to seeing experiments (randomized controlled drug trials, for example) overturn otherwise compelling observational data.  So it isn't a mystery why the paper had trouble with reviewers, and it does not require any conspiracy theories about public health researchers being closed to new ideas or to data. 

Cost is tricky in health care

An interesting Yglesias post:
Instead the existing Medicare Advantage program tries to apply a risk-adjustment formula to the patients, and Ryan proposes doing the same in his greatly expanded version of Medicare contracting-out. But this doesn't change the fact that the real profit-making opportunity here is to try to identify and exploit inevitable flaws in the risk-adjustment process. The winning strategy is to craft products that are appealing to customers the formula is willing to overpay for and unappealing to customers the formula would underpay for. Now that could be a small problem or a it could be a giant problem, all depending on how good the government is at setting the rates. Which is to say that for bringing private bidders into the process to work well, you need really effective central planning. And to the extent that you have effective central planning, it seems to me that it makes sense to take advantage of the economies of scale that come from a single-payer system.
I think this understates just how tremendously difficult epidemiological risk modeling really is.  But I do not think it undermines the central point -- once you have put in the work of risk-adjusting the payouts to private companies, you have all of the machinery for a single-payer approach.  And it is dead obvious why a naive approach won't work.  But even the modern risk models aren't that great, according to Peter Orszag:
In 2006, Medicare Advantage plans were overpaid by more than $3,000 per beneficiary because they were able to select beneficiaries who cost less than their risk-adjusted payments. About $1,000 of that overpayment reflects what the plans were paid, rather than what they bid. So relative to their bids, the plans were overpaid by $2,000 per beneficiary -- or roughly 25 percent of the bid, on average.
That is a huge profit-making potential (just think of the return on investment for that statistical model).  So you focus the incentives of the private sector on finding weak spots in the model (because that is incredibly profitable) and not on reducing health care costs (because that is hard and painful). 
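Spelling out the arithmetic in Orszag's numbers: $3,000 of overpayment per beneficiary, of which about $1,000 reflects payments above the bids, leaves roughly $2,000 per beneficiary of overpayment relative to the bids; if that is about 25 percent of the bid, the implied average bid works out to around $8,000 per beneficiary (the $8,000 is simply what the quoted figures imply, not a number Orszag gives).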

I am somewhat sympathetic to the "put lots of resources into medicine and technological improvements will follow" type of argument.  But it seems to me that this approach is going to focus the innovation in precisely the wrong spot. 

Friday, August 17, 2012

More Econ 101

Worth pondering
So, when somebody says that the government should buy paper from a private provider, hey, great. There are lots of buyers and sellers of paper. Go for it. If somebody wants to contract out janitorial work or food service, again, there are lots of buyers and sellers of those services. But I’ve never seen how contracting out more specifically governmental tasks really improves things. You go from having a monopoly provider, with all the disadvantages thereof, to a monopsony buyer, that still has to exert oversight (which is subject to all sorts of information problems and all sorts of good and bad incentives). And if there’s only one buyer, that buyer is, effectively, a monopoly provider for the public.
In all sorts of arenas, information is the real limiting factor.  I like to apply this to fields like health care.  There is no real open market in health care.  We have subsidized insurance or government-provided insurance for the majority of customers.  The plans that exist often lock in networks that make it harder to comparison shop.  Most customers do not have the ability to shop around on price.  Treatments are legally barred from being sold on the open market (you can't self-treat with a statin).  And medical doctors are a protected guild with a limited number of residencies (and thus a cap on members), leading to increased costs due to shortages. 

None of this looks like a functional market with good information, equal quality goods, freedom of entry/exit, or substitution effects on treatment. 

Worth pondering. 

Saturday, August 11, 2012

Harm Reduction

DrugMonkey has a post on marijuana legalization efforts in Washington state.  Maybe this is my public health background, but why is the focus on the legal status of the drug and not on harm reduction?  I have no trouble believing that use of this substance may have adverse effects over the long term.  But so do legal substances like tobacco and alcohol.  Furthermore, how does that harm stack up against the harm done by a prison term: a lifetime of reduced employment opportunities, acculturation in a brutal environment, and being a victim of the violence that occurs in jails?

Why not look for a middle ground?  A heavily taxed substance that minors are prevented from buying (the tobacco model)?  Decriminalization, so that use means fines rather than police breaking down one's door (the Canadian model)?

Why is the focus not on maximizing public health outcomes?

Friday, August 10, 2012

Yglesias on the Dangers of Observational Data

Matt Yglesias has a piece on the Dangers of Data that really should be called the Dangers of Observational Data!  True randomized or quasi-randomized experiments, when you can do them, have none of the limitations ascribed to the thermostat problem (and, in physics, an experiment is how you would figure out what the thermostat actually does). 

I am also amazed by the different foci that fields put on different methodological issues.  In observational pharmacoepidemiology we are obsessed with the issue of confounding by indication and constantly worry that it is leading to non-trivial amounts of bias.  The concept behind confounding by indication is awfully similar to the problem described by Milton Friedman's thermostat.  But I never hear economists bring that up as a major issue with observational data; perhaps because they lack experiments to tell them how often an observational estimate is wildly inaccurate (whereas in pharmacoepidemiology these experiments are slow and rare rather than non-existent). 
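For readers who have not seen Friedman's thermostat, here is a toy version in Python (all numbers invented): the furnace burns more fuel exactly when it is cold outside, indoor temperature barely moves, and the observational correlations make the fuel look irrelevant even though it is doing all the work.  Confounding by indication has the same structure, with the prescribing physician playing the thermostat.

import numpy as np

rng = np.random.default_rng(3)
outdoor = rng.normal(0, 10, size=5000)          # cold snaps and warm spells
fuel = -outdoor + rng.normal(0, 1, size=5000)   # thermostat burns more fuel when it is cold
indoor = 20 + outdoor + fuel                    # fuel truly offsets the cold, degree for degree
print("corr(fuel, indoor):   ", round(float(np.corrcoef(fuel, indoor)[0, 1]), 2))
print("corr(outdoor, indoor):", round(float(np.corrcoef(outdoor, indoor)[0, 1]), 2))

Both correlations come out near zero, so a naive analyst would conclude that neither the weather nor the fuel has much to do with indoor temperature.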

None of this is to say that you cannot do valid inference with observational data -- you most definitely can.  But it does highlight the need to be very, very careful. 

Monday, August 6, 2012

When is a conclusion credible: health care edition

I think that critical thinking skills are needed when discussing health outcomes now more than ever.  Consider this prelude to reporting study results by Matt Yglesias:

Conservatives don't like Medicaid because they believe programs that tax the rich to transfer resources to the poor are bad for long-term economic growth and violate principles of cosmic justice. But since nobody likes to admit to the existence of a tradeoff, conservatives have lately taken to mounting the bizarre argument that giving health care to low-income poor people doesn't improve health outcomes.
Now, let us consider just what this would imply.  If Medicaid, a conservative, inexpensive, and rationed health care system, cannot improve outcomes, what hope is there for any level of health care?  After all, this care is focused on the sickest patients, who have the fewest resources to handle a medical condition on their own.

So one would actually begin to wonder whether any health care is effective at all.

Now it turns out that more careful studies are showing that Medicaid is a pretty cost-effective way of saving lives.  But if we believed the original (flawed) studies, wouldn't the really exciting take-home point be that modern medicine is ineffective at saving lives?  That is a huge share of GDP that we could simply stop spending and re-allocate to more productive activities.

It really was an odd argument, all around. 

Friday, March 2, 2012

Added Sugars

From Aaron Carroll:
As we talk about how hard it is to combat obesity, it’s worth thinking about numbers like this once in a while. If we could get kids to give up half, not even all, of the added sugar in their diet, their overall calorie consumption would drop by 8%. They’d be dropping about 140-180 calories a day from their diet. And those calories are totally empty – they’re from added sugars they don’t need, and that won’t satiate them. When other research shows that reducing your caloric intake by 20 (yes, twenty) calories per day for three years could lead to an average weight loss of 2 pounds, making this small change could be a big deal.
Okay, there is a good point here and a really bad point here.  The good point is that added sugar seems to be a bad thing.  It promotes tooth decay (with two root canals, I can say that this is a big deal), it seems to be efficiently absorbed, it is associated with diabetes (a disease you really do not want), and its nutrient value is nil.

But the idea that a 20-calorie-a-day change will mechanically lead to a two-pound weight loss over three years is kind of odd.  It works, mathematically.  But it ignores all sorts of issues: how the body adapts to lower intake, what foods are eaten (is it the same composition with portions shrunk by 1%?), and how the change may alter activity levels.  The claim makes something that we know is hard sound very, very easy.

Programs like Weight Watchers seem to get good results partly through restriction, but they also build in incentives to change the composition of the diet.  Just look at how fruits and vegetables can be zero points in the current plan.

So, in an odd sort of way, the last point detracts from the main issue here: added sugars are bad and trying to expose your children to less of them is unlikely to be a bad thing.

Wednesday, January 11, 2012

Nice Observation

This post was hoisted from the comments on Megan McArdle's website, and it makes a point that we often forget: the eventual chance of death is 100%, and the hazard of death is tightly associated with how long you have lived so far.  Once you make it out of childhood, it is not atypical to live your 20s and 30s free of serious health concerns.  On the other hand, how common is it for 80-year-olds not to have at least one health issue that affects either risk of death or quality of life?