Creative Destruction

February 2, 2007

Some fun statistics

Filed under: Blog Status,Content-lite,Navel Gazing,Statistical Method — Daran @ 4:12 pm

According to this post, there were 2.22 million posts and 1.3 million comments to blogs in January. That’s a little over 1 comment for every two posts.

Assuming the Pareto principle applies (80% of the comments going to the top 20% of posts), we can infer that the other 1.78 million posts shared just 0.26 million comments, which means that at least 1.52 million posts got no comments at all (more, if any of those posts drew more than one comment).

CD has 556 posts and 4,946 comments according to our stats page. That gives us a comment-to-post ratio of 8.9.

Interesting. That’s us.
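The arithmetic can be sketched in a few lines. This is a rough illustration only: the 80/20 split is an assumption, and the no-comment figure additionally assumes each of the remaining posts drew at most one comment.

```python
# Blogosphere totals for January, from the cited post
posts = 2.22e6
comments = 1.30e6

# Pareto assumption: the top 20% of posts absorb 80% of the comments,
# leaving the other 80% of posts to share the remaining 20% of comments.
tail_posts = 0.8 * posts          # 1.78 million posts
tail_comments = 0.2 * comments    # 0.26 million comments

# If each of those tail posts drew at most one comment,
# the rest received none at all.
no_comment_posts = tail_posts - tail_comments  # ~1.52 million

# Creative Destruction's own figures, from the stats page
cd_ratio = 4946 / 556             # ~8.9 comments per post
```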

January 16, 2007

Racism in the Electoral College: Not So Much

Filed under: Blogosphere,Debate,Race and Racism,Statistical Method — Robert @ 2:25 am

Rachel of Alas has a post about structural racism up for MLK Day. In the discussion section of that post, we get into it hot and heavy about the Electoral College and how it is, per Rachel, a “very good example of structural racism”. Why? Because more white people live in the small states, which are proportionally “whiter” than the rest of the country. In Rachel’s words, “It proves that whites votes count for more.”

Not really. Aside from the obvious logical flaw of assigning a weight based on skin color when it is in fact based on a geographic distinction (a black man who lives in Wyoming gets the same overweighted vote in the Electoral College as a white man), the numbers do not, in fact, support Rachel’s position. In the spirit of the “blue states give less” and “red states are dumber” statistical simplifications that go around the Web every time there’s an election (I’ve posted one or two myself), in a follow-up post she comes up with two tables purporting to show that all the small states are heavily white, and all the big states are less white, and thus the Electoral College deprecates the black vote enormously. (The actual quote from the first post is via a source who she cites approvingly, stating that “The Electoral College negates the votes of almost half of all people of color.”)

Again, it turns out, not really. In fact, not only not really – it’s pretty much a wash. Here is an exhaustive table of the states which have votes in the Electoral College. The first six columns are self-explanatory. “EV Weight” is an inverted factor showing the significance of a single person’s vote in that state, compared to the hypothetical “fair” number of people who should get 1 electoral vote if everything were even-steven. Numbers lower than one indicate that a person voting in that state has more than their “fair share” of input into the Electoral College; the winner here is Wyoming, at 0.31. The worst-off state is Texas, at 1.24. The “EV Over/Undercount” column indicates how many EC votes the state would gain or lose if everything were perfectly proportional (and if we could have fractional EC votes). The “White” and “Nonwhite Over/Undercount” columns indicate how many of those over- or undervotes would be distributed among the racial balance of the state; if a state “should” have 10 more EC votes and is 80% white, then 8 of those votes are credited to the white column, and 2 to the non-white.

The point of all this was to come up with a picture of how the distribution of Electoral College votes would change if everything were proportional to population. That final number is damning for Rachel’s view of a world where the Electoral College is a huge structurally racist institution: 4.80 electoral votes would shift, relative to population. That’s about 0.89% of the EC vote total. Check out the figures for yourself below the break.
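The weight calculation can be sketched as below. The population figures are approximate 2000 census numbers (my assumption; the table in the post may be built from slightly different data):

```python
# Illustrative Electoral College weight calculation,
# using approximate 2000 census figures.
US_POP = 281_421_906
TOTAL_EV = 538

def ev_weight(state_pop: int, state_ev: int) -> float:
    """People-per-electoral-vote in the state, relative to the national
    people-per-electoral-vote. Below 1.0 means the state's voters are
    overweighted in the Electoral College."""
    fair_people_per_ev = US_POP / TOTAL_EV
    return (state_pop / state_ev) / fair_people_per_ev

def ev_overcount(state_pop: int, state_ev: int) -> float:
    """EC votes the state holds beyond its strictly proportional share
    (fractional EC votes allowed)."""
    fair_ev = TOTAL_EV * state_pop / US_POP
    return state_ev - fair_ev

# Wyoming: 493,782 people, 3 electoral votes
wy_weight = ev_weight(493_782, 3)     # ~0.31, matching the table
wy_extra = ev_overcount(493_782, 3)   # ~2.06 "extra" EC votes
```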


October 27, 2006

Substantive Criticisms of the Lancet Report: Part 2

Filed under: Iraq,Science,Statistical Method — Robert @ 10:06 pm

Only a week later than promised (hey, I’m not getting paid), my review of the problems I see in the Lancet article on mortality in the Iraq war.

The article is much briefer than the study, which I examined here. So this review will also, theoretically, be briefer (cheers from the gallery). In fact, I only found three issues. However, one of them is potentially damaging to the study’s methodological choices (although I lack the mathematical skills to make a determination of that point), another casts direct doubt on the reliability of the authors’ reporting, and the third makes it clear that the study’s sampling method was not, in fact, random. These are major issues, in other words.

To repeat my disclaimer from last time:
I am not a trained statistician; any numerical analysis which crawls its way into this post should be viewed with a skeptical eye and read broadly and generally. I am skeptical towards this article’s conclusions on grounds of its consistency with the other things that I know, but this post is not about that inconsistency, and is instead a list of what valid critiques I can come up with against the study and the article. I have skimmed the IBC press release slamming the study, and have glimpsed other criticisms, but have not done any extensive reading in the “opposition research”.

Criticisms of the article which also apply to the first document I reviewed will not be repeated unless new information is noted.

1. The study authors selected a target survey size of 12,000 people in 50 clusters throughout the country. The sample size is adequate. The small number of clusters, however, raises a statistical concern: with each single cluster contributing 2% of the total study data, any unusual cluster will have a disproportionately large effect on the total outcome of the study. The authors make the (legitimate) point that movement in Iraq is difficult and dangerous, and that word-of-mouth about the interviewers’ benign purpose, propagating through the households of each cluster, reduced that danger; a larger number of smaller clusters would greatly attenuate this protective effect. That is true, but immaterial to the degree of confidence we can have in the study result.

The mathematical statistics needed to figure out how many clusters you ought to use in a study are complex. An article in the International Journal of Epidemiology provides a nomogram (that there is fancy language for a “chart”) that tells you how many clusters you should use for a given prevalence rate (how often you expect to find what you’re trying to find), design effect (how much variation your methodology will create relative to an ordinary random sample), and cluster size (number of respondents per cluster). I do not know the design effect value, but we do know the prevalence rate (about 2.5%) and the cluster size (about 240). For middling values of design effect, the nomogram suggests between 125 and 1500 clusters be used.

It will take a better statistician than your humble correspondent to nail this one down, but it does seem plausible that the number of clusters selected is too small to be adequate.
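One generic way to quantify the concern (not the nomogram calculation from the journal article) is the standard design-effect formula deff = 1 + (m − 1)ρ, where m is the cluster size and ρ the intra-cluster correlation; the ρ values below are purely hypothetical:

```python
def design_effect(cluster_size: int, icc: float) -> float:
    """Standard design effect for cluster sampling: 1 + (m - 1) * rho."""
    return 1 + (cluster_size - 1) * icc

def effective_sample_size(n: int, cluster_size: int, icc: float) -> float:
    """Nominal sample size deflated by the design effect."""
    return n / design_effect(cluster_size, icc)

# Lancet-style design: ~12,000 respondents in 50 clusters of ~240 each.
# Even modest intra-cluster correlations (hypothetical values) shrink
# the information the sample actually carries:
n, m = 12_000, 240
for icc in (0.01, 0.05):
    n_eff = effective_sample_size(n, m, icc)
    # icc=0.01 -> deff ~3.4,  effective sample ~3,540
    # icc=0.05 -> deff ~13.0, effective sample ~930
```

With only 50 large clusters, a small ρ is enough to make the survey behave like a far smaller simple random sample, which is the substance of the cluster-count objection.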

2. On page 2, the study authors detail their selection methodology. Each cluster’s origin point was selected from a province and then a town weighted by population (fair enough). The cluster’s starting household, however, was picked in this fashion: “The third stage consisted of random selection of a main street within the administrative unit from a list of all main streets. A residential street was then randomly selected from a list of residential streets crossing the main street. On the residential street, houses were numbered and a start household was randomly selected.”

This is hugely problematic. If you do not live on a residential street which adjoins a main street in your town, then your household is excluded from the statistical universe the study is measuring. The study did not sample Iraq; it sampled the subsection of Iraq that happens to adjoin a major road in town. This is a problem for a study attempting to measure anything, but in the case of a study measuring wartime fatalities, it is a critical flaw. Main streets are densely populated areas. Densely populated areas are the locales to which insurgents in an urban conflict flock. There’s no point in carbombing Farmer Ahmed’s cow; you go to the market. Which is on a main street.

The study authors could have at least partially corrected for this non-random element of their sample by assessing the proportion of the Iraqi population that could have been sampled by this method, and using that total population figure in their overall calculations. They did not do this, and in fact make no mention of the non-random element of their selection.

This is a serious objection to the study’s validity; the most serious I have found.

3. Also on page 2, the study authors write “The survey purpose was explained to the head of household or spouse, and oral consent was obtained. Participants were assured that no unique identifiers would be gathered.”

This is problematic.  Not intrinsically, but because it directly contradicts claims made by the study authors concerning their validation work on the study, specifically in the area of detecting and accounting for multiple accounts of the same death. Study author Burnham, in a media interview (h/t Amp), said “Double counting of deaths was a risk we were concerned with. We went through each record by hand to look for this, and did not find any double counting in this survey. The survey team were experience in community surveys, so they knew to avoid this potential trap.”

If no unique identifiers were gathered, then it is not possible that they went through and checked for duplicates. Either they lied to the respondents, or they lied to the press, or their article inaccurately reflects the methodology that was in place.

Overview and Conclusion

When I completed the first half of this critique, my overall impression was that there were some issues with the study that I found troubling, specifically the strength of their claims regarding the study’s validity and the difficulty their methodology created for other researchers attempting to verify their work. However, I thought that on balance the authors had done an adequate job of a very difficult task, and that – while their numbers were probably a little bit high – they were on the right lines.

I am forced to reconsider that proposition. The exclusion of an indeterminate, but large, fraction of the Iraqi population from the study’s potential range of survey respondents – particularly in view of the fact that the excluded fraction is also the group most likely on common-sense grounds to have avoided mass fatalities – is extremely troubling.  It isn’t a priori proof that the study authors are dishonest or incompetent; it is proof that the study does not measure what it purports to measure. What appears to be an attempt to cover over another flaw, the impossibility of avoiding duplicate reporting under the study’s purported methodology, amplifies my concerns about the study’s integrity.

What are the real civilian casualty figures in Iraq? “Depressingly high” is an unsatisfactory answer, but until someone conducts a proper population-based study, that’s the best we have to go on.

October 17, 2006

The Great Wall Of China Fallacy

Filed under: Iraq,Statistical Method — Ampersand @ 3:09 pm

From Gateway Pundit (with a curtsy to Crooked Timber):


October 11, 2006

NY Times Coverage Biased Against Lancet Study

Filed under: Iraq,Statistical Method — Ampersand @ 10:16 am

UPDATE: The Lancet Study can be downloaded here (pdf link). A companion paper, which provides some additional details, can be downloaded here (pdf link).

The New York Times coverage of the new Lancet study of Iraqi deaths, while maintaining an objective tone, is heavily slanted against the study; many of the painfully bad right-wing arguments against the earlier survey are repeated by the Times, usually without rebuttal. For example:


October 10, 2006

Misleading nonsense at Firedoglake

Filed under: Politics,Politics and Elections,Statistical Method — bazzer @ 9:53 am

If Connecticut wants to oust Joe Lieberman for his support of the war, then fine. Many of his critics, however, seem worried that the war alone might not be sufficient, so they’re hurling everything they can at him hoping some of it will stick.

This trend reached its ludicrous apex, in my opinion, in this Jane Hamsher piece posted at Firedoglake.

Now I’m no fan of Joe Lieberman, but this strikes me as a grossly unfair and disingenuous abuse of statistics. Hamsher slams Lieberman because Connecticut sends more money to Washington than it gets back by a higher ratio than almost any other state.

True enough, but this ratio tends to increase as a function of a state’s wealth. Richer states tend to have a net outflux of dollars to Washington and poorer states a net influx. Connecticut is, by some measures, the richest state in the union, and in an indirect way, that is why Hamsher is slamming Lieberman.

Maybe it’s just me, but I find that pathetic. Perhaps it’s just desperation, as Lieberman’s lead four weeks out is beginning to look insurmountable. Perhaps when your “referendum” on the Iraq war looks as if it won’t turn out the way you want, you start urgently trying to make it about other issues as well. Still, criticizing Lieberman for not turning Connecticut into Mississippi seems like a bit of a stretch to me.

September 19, 2006

Poor Methodology In Anti-Divorce Study

Filed under: Statistical Method — Ampersand @ 3:38 pm

Last year, on CNN’s “Anderson Cooper 360” show (November 22, 2005), Elizabeth Marquardt, author of Between Two Worlds – which is being re-released in a trade paperback edition this month – had this exchange with Cooper:

September 16, 2006

What the ICRC really tells us about War Casualties

Filed under: Feminist Issues,Statistical Method,War — Daran @ 8:43 pm

In recent posts, I’ve been debunking the myth – mistakenly attributed to the ICRC – that women and children are 80% of war casualties. Here I summarise and discuss the findings of four papers from the peer-reviewed British Medical Journal, all of which were based on patient data from Red Cross and Red Crescent Hospitals. (See also my Analysis of the figures given in the Lancet study on the war in Iraq.)


The data are consistent with the hypotheses that upwards of 75% of war casualties are adult men, and that upwards of 90% of war casualties are male. See below for detailed findings and discussion.

September 13, 2006

Evolution of a Myth: More on that 80% Figure

Filed under: Feminist Issues,Statistical Method,War — Daran @ 5:00 am

Unsatisfied with merely showing that the claim that “80% of war casualties are women and children” was misattributed to the ICRC, I decided to see if I could trace the statistic back to its origin. After all, the figure could still have a basis in well-founded research. After several hours of intensive Googling, I was able to trace it back to a claim in a 2002 edition of the Refugee magazine published by the UNHCR, which itself was derived, at least in part, from a UNICEF report published in 1996. The claim in the UNICEF report, however, states only that women and children are 80% of displaced people. Finally I unearthed the real ICRC figure, which is that less than 26% of war casualties are women and children.

Below the fold, I describe my search in more detail.

September 8, 2006

Data not bad anymore, but misleading none the less

Filed under: Economics,Statistical Method — Adam Gurri @ 6:09 pm

Yes, I’m surprised I still exist, too.

Anyway, saw Ampersand’s correction, and had me a look at the Kevin Drum Analysis.

It’s misleading, and I will tell you why: look at the census data he’s drawing on.  We are not talking about the Median Income levels for all citizens of the United States.  We’re talking about Median Income…for each household.

I’m not splitting hairs. There are a number of reasons why the median income for households might decline that would in no way suggest a decline in the standard of living. If the proportion of low-income people who are buying their own houses increases, then the median might decrease, but the people who already owned homes are not any worse off than they were before.

It could also be that more single people are buying their own homes. If you have fewer income earners in each household, then naturally the median “per household” goes down, but comparing a house owned by a twenty-something single IT student with a house occupied by a family of five would be rather unproductive if you’re attempting to get a sense of general standard of living.
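A toy example of that composition effect, with made-up incomes: when a two-earner household splits into two one-earner households, the median household income falls even though no individual earns a dollar less.

```python
from statistics import median

# Hypothetical incomes, in thousands of dollars.
# Before: a two-earner household (40 + 40) and a single-earner household (50).
households_before = [40 + 40, 50]   # household incomes: [80, 50]

# After: the two-earner household splits in two; every individual
# still earns exactly what they earned before.
households_after = [40, 40, 50]

before = median(households_before)  # 65.0
after = median(households_after)    # 40
```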

But enough talk.  Let’s look at some information.


September 7, 2006

Survey Says … (in honor of Labor Day)

Filed under: Content-lite,Politics,Statistical Method — Brutus @ 5:40 pm

Undernews, the online report of the Progressive Review, has the following polling results, which I quote in whole and to which I add my comments. This isn’t my usual cup of tea, as I object to polls in principle and to their reliability on methodological grounds. However, for idle consideration, these data provide a curiously (if not characteristically) sad snapshot of consensus belief in the U.S.

Fifty-eight percent of Americans have at least a somewhat favorable opinion of labor unions while 33% disagree and have an unfavorable view … By way of comparison, 69% of Americans have a favorable opinion of a company the unions love to hate: Walmart. Twenty-nine percent have an unfavorable opinion of the retail giant. Forty-eight percent (48%) have a favorable opinion of General Motors while 21% hold the opposite view.

Walmart has garnered probably more than its share of bad press over the years. Walmart and Bill Gates are both emblems of America, each in their own way, and we fixate on them to the exclusion of other similar actors on the national and international stage. So in answer to the radically reductive, false-dualism question – favorable or unfavorable? – more people like Walmart than like labor unions. Perhaps that means we care more about how we spend our money than how we earn it. At the very least, you just can’t beat with a stick those low, low prices for earning goodwill. As to GM, it could only dream of a 48% market share, as more people in the U.S. buy Toyotas than anything else (I think – didn’t really check).

The volunteer Minutemen who organized patrols of the Mexican border are viewed favorably by 54% and unfavorably by 22%.

Why this is stuck in there I can’t say. Paranoia over illegal immigrants taking our high-paying, union-pedigreed American jobs? Those familiar with the Minutemen know that the tone and affect of those folks range from sport to vigilantism, which hardly squares with the mostly favorable assessment of the public. I fully recognize that undocumented immigration and border crossing are illegal, but those aliens aren’t exactly vermin, so the scorn heaped upon them is a bit much for my taste.

Fifty-three percent (53%) of men have a favorable opinion of labor unions along with 61% of women. White Americans are less likely to have a favorable opinion of unions than others. Thirty- and forty-somethings have less favorable views than those under 30 and over 50. This year, 38% of Americans say they celebrate Labor Day as a time to honor the contributions of workers in society. Forty-five percent celebrate the holiday as the unofficial end of summer.

Curious how the data is split according to demographics, namely age, race, and gender. Either we’re not supposed to notice or pay attention to those things because they’re merely cultural constructs (or destructive cultural constructs), or they matter a lot. I can never remember which for sure, as the PC response varies widely. That opinion shifts from those old enough to remember unions actually working to protect workers, to the younger set for whom unions have become just another dues-collecting bureaucracy, is no surprise. The union movement isn’t quite dead, but it’s been pretty well gutted.

Similarly, we don’t generally know anymore that Labor Day was first celebrated in 1882 to honor the sacrifices of labor to obtain safe and fair working conditions we now mostly take for granted. Most of the rest of the world celebrates labor on May 1, but we have now come to understand that Labor Day merely marks the end of summer and the start of the back-to-school season. That result echoes the transition of Christmas, Easter, and Halloween from sacred to manifestly secular holidays celebrated even by those of non-Christian faiths or no faith at all.

July 22, 2006

The Lancet study: My $0.02

Filed under: Statistical Method,War — bazzer @ 7:37 pm

Beset with personal and professional obligations of late, I’ve been shamefully absent from this blog for so long now that all of you have forgotten me altogether. The current debate about the famed Lancet article is a fascinating one, however, and it seems as good a time as any to dive back in.

Applying rigorous statistical methodology in the soft sciences is always fraught with challenge. That’s not to say that it can’t be done effectively (it can, and it should!) but it’s very much a different ballgame. My training was in the hard sciences, where the application of statistical analysis is fairly straightforward. When my wife, the social scientist of the household, asks for statistical help in her research, I always find it more daunting than quantifying (say) nuclear reaction rates.

This particular study has been a lightning rod for controversy since its publication, as partisans from both sides of the divide bring their own ideological biases into the debate, and the whole discussion devolves into a meta-argument that has more to do with the political axes the participants have to grind than with the article itself. We saw it clearly here on this blog, as intense passions were inflamed over an argument about statistics.

For my part, I found the Lancet paper itself largely unremarkable, in both its methodology and its conclusions. I think the researchers did a reasonable job considering the nature of their study. At the end of the day, they essentially said they had a 95% confidence level that the number of civilian deaths fell within the range of 8,000 to 198,000.

Granted, those are large error bars, but that often happens in scientific research. The real crime in the reporting of these findings came from ignoramuses who mindlessly took the unweighted mean of these two figures, which led to countless media headlines screaming “100,000 Iraqi Dead!” and provided the anti-war left with a convenient cudgel of a talking point, lent undue legitimacy by Lancet’s respected status within the world of medical journals.
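For the record, the headline figure is just the unweighted midpoint of that interval:

```python
# The study's 95% confidence interval for excess civilian deaths
low, high = 8_000, 198_000

# The unweighted midpoint that the headlines rounded to "100,000"
midpoint = (low + high) / 2   # 103,000
```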

I think that’s what really got Kaplan’s drawers in a knot, and understandably so. I do think many of Kaplan’s criticisms of the report’s methodologies missed their mark, but I also think Adam was too quick to apologize for linking it into this discussion. Scientists are a cantankerous lot, and as someone who has refereed scientific papers for peer-reviewed journals myself, I can assure you that arguments very similar to Kaplan’s were given full voice behind the editorial scenes before the final draft was published. They were an integral part of the discussion then, and there’s no reason they should not be part of the dialog here as well.

The Lancet Article

Filed under: International Politics,Iraq,Statistical Method — Adam Gurri @ 1:44 pm

As I unintentionally walked into a debate on this issue, I thought I’d take the time to look at it by itself.


July 21, 2006

The Era of Passion

Filed under: Debate,Statistical Method — Adam Gurri @ 6:26 pm

This will be a continuation from my previous post.

Daran’s response in particular deserves to be looked at here. In it, he argued that I was simply setting the standards so high that no amount of information could realistically meet them; that what I was doing was tantamount to what tobacco companies have done every time they argue that there is no “real proof” that smoking increases the risk of cancer.

I obviously have not presented my argument very accessibly. I will attempt to remedy that immediately.


July 16, 2006

Judging Iraq

Filed under: Debate,Statistical Method — Adam Gurri @ 4:05 pm

Howdy everyone!  Know I haven’t been the biggest voice around these parts for a while, but I figured I’d just dive in.


May 30, 2006

Predictive power

Filed under: Economics,Science,Statistical Method — Adam Gurri @ 4:52 pm

I've started to look over the Prediction Markets.

What a fascinating phenomenon!  These buggers are apparently quite accurate, and Google has had their own internal one to help keep ahead of the game.  By all accounts, the larger the quantity of people involved, the more accurate they become.

The intelligence community has already had one failed courtship with this new approach to aggregating information.

As they have been consistently behind the curve for as long as I can remember, it's somewhat tragic but nonetheless unsurprising that they would be unwilling to even try a new idea.

At any rate, I've been captivated by my obsession of the moment.

May 26, 2006

Going Over to the Dark Side

Filed under: Blogroll,Statistical Method — Brutus @ 3:58 pm

I posted in the past on the fallibility of using numbers to win support for one's arguments. So it's a little ironic, I suppose, that I've found a third blog based on statistical methods to add to our blogroll to fill my quota: Freakonomics. The idea behind the book of the same title and the blog is that by crunching enough numbers and controlling for enough factors, the truth behind seemingly obvious cause-effect relationships can be revealed. From the blog:

[E]conomics is, at root, the study of incentives — how people get what they want, or need, especially when other people want or need the same thing. In Freakonomics, they set out to explore the hidden side of — well, everything … Freakonomics establishes this unconventional premise: if morality represents how we would like the world to work, then economics represents how it actually does work.

Happily, the authors appear to limit their inquiries to economics, as opposed to just any sort of numerical evidence. For instance, I have a real problem with polling in particular. I've been called upon to participate in quite a few phone polls over the past few months, and I really object to the way certain questions are posed and the way the answers require a virtual shoehorn to fit within. Just one example: in preparation for a speech I recently gave, I found a poll conducted by CBS News/New York Times earlier this year that had two interesting (if problematical) questions:

In general, how much trust and confidence do you have in the news media — such as newspapers, TV, and radio — when it comes to reporting the news fully, accurately, and fairly: a great deal, a fair amount, not very much, or none at all?


In general, how much of the time do you think the news media tells the truth: all of the time, most of the time, only some of the time, or hardly ever?

Numbers reported in response to the questions were somewhat divergent, but I can’t say that the nature of the questions themselves is so different. Mere syntax was enough to produce divergent results. Those results were also broken down by political affiliation (Rep., Dem., and Independent), which I find questionable, since even though one can register with only one party, one may not adhere strictly to only that party’s positions.

At any rate, the Freakonomics blog will make a good addition to our blogroll for its clear-eyed, apolitical approach to examining evidence and busting myths through unflinching statistical methodology.

April 2, 2006

Calling Motive into Question

Filed under: Debate,Statistical Method — Adam Gurri @ 9:48 pm

From Brutus' recent post:

"His insistence that he has his numbers right misses the point of the argument. Plus, it reinforces my belief that numbers often do not show the truth behind an assertion."

"(…)He showed us what he wanted us to see. In some circles, that is called lying with numbers, and it is disingenuous."

My obsession, in the end, is with neither politics nor any particular field – but rather, with method.  And while I appreciate the fact that Brutus utilizes a solid methodological critique in his argument, I still believe that his approach to debating and analyzing this issue could be greatly refined.

First, there is his assertion that "numbers do not show the truth of an assertion."  Statements like this bug me.  People love to drop the "lies, damned lies, and statistics" line, but it is wholly counterproductive.  If you can't aptly support a point that you're making with evidence, then why say anything at all?  How can you possibly qualify any thesis under any circumstances?

The fallacy of this argument, of course, is that the only way to demonstrate it is by showing how the numbers used by one's debate partner are inaccurate.  But in order to do that, you have to use more numbers, and show what they mean in their proper context.

Getting nihilistic about statistics (hey, I made a rhyme!) is completely counterproductive.  And calling someone "disingenuous" or questioning their motives puts an unnecessary stain on the discussion.  It's easy to honestly utilize an inaccurate method of analysis, and you should remember that you yourself might be doing this when you attribute a difference of interpretation to a flaw in the other person's character.  It is nowhere written that the confidence you have in your conclusions means that you are correct.

Set your standard.  Look at long term trends, rather than a few specific moments.  And then explain why those trends mostly support the standard you set up for how to judge your thesis.

In this case, Brutus argued that tax-flattening had continued unabated since the 1960's.  If we look at Bazzer's comment,  it seems to me that we could break it down like this:

  1. Thesis: Tax flattening has not continued unabated
  2. Standard: If the top marginal rate is higher now than it was when Reagan left office, the thesis is probably true.
  3. Evidence: In Reagan's day it was 28%, now it's 35%
  4. Conclusion: "continued unabated" is an inaccurate analysis of the situation

I would break down Brutus' response as follows:

  1. Thesis: Bazzer picked out his numbers carefully to make a disingenuous argument for what he believes, thus providing a good example of why numbers are bad evidence.
  2. Standard: If we look at the long-term trend since the 1960's and see that it is geared mostly towards a declining top marginal rate, the thesis is probably right.
  3. Evidence: This chart of tax rates since 1913.
  4. Conclusion: There has been a long term trend towards a flattening of the income tax, and Bazzer's decision to use a smaller timeframe demonstrates the folly of numbers as evidence for anything.
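As a sketch, both standards check out against the published top-rate history. The rates below (91% before the 1964 cut, 28% when Reagan left office, 35% in 2006) come from the public federal tax schedules, not from either post:

```python
# Top marginal federal income-tax rates (percent), from the public
# historical rate schedules
top_rate = {1963: 91, 1988: 28, 2006: 35}

# Bazzer's standard: the rate now vs. when Reagan left office
rose_since_reagan = top_rate[2006] > top_rate[1988]   # True

# Brutus' standard: the long-term trend since the early 1960s
fell_since_1963 = top_rate[2006] < top_rate[1963]     # True

# Both are true; the two standards simply answer different questions.
```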

This is a very good methodological critique, when it comes to making a case for a long-term trend in our tax structure.  It falls short, however, when discussing the person he is debating with.

There are many reasons that Bazzer could have chosen Reagan as an example.  If you were simply to point out the inaccuracy of his method, that would be excellent–and the link you provided is a goldmine of useful information.  But to take the next step and call it "lying with numbers" makes certain assumptions about motives.

Let's say, for the sake of argument, that this analysis of motives could be broken down into the following argument:

  1. Thesis: Bazzer has questionable motives for making the argument the way he did.
  2. Standard: If it can be demonstrated that Bazzer simplified the information in a way that made it appear more favorable to his interpretation, then he is lying with numbers.
  3. Evidence: Bazzer used two specific numbers in a twenty-year timeline and they supported his point, even though a look at the longer trend shows the opposite tendency to the one he is arguing for.
  4. Conclusion: His character flaw has led him to support his analysis with inaccurate evidence.

Do you not see the frivolity of this?  You can't ever know why people do what they do.  You can know what they've said and done, however, and you can criticize and analyze their interpretations on the merits of their arguments.  In this way, information is shared and analytical standards are refined.

Since any evidence you use to support your argument about the person's motives can only be simplistic and inaccurate, the fact that you are making that argument about a person who you disagree with, on the merits of the very argument you're putting forward, calls your own motives into question.  This has no effect whatsoever except to help people rationalize why they ignore the people they disagree with.  And if you aren't ignoring them, then why on Earth would you want to make this sort of personal accusation which you can't at all demonstrate to be accurate?
