Monthly archives: February 2004
Competitive Balancing Act I, Scene I—The King James Version: An Overview of the Literature
2004-02-24 13:49
Other entries in the series: Competitive Balancing Act I—The King James Version: An Overview of the Literature, Scenes I, II, and III

I called the New World into existence to redress the balance of the Old.
—George "Oil" Canning

I was watching "Real Time with Bill Maher" the other day, and Bill's guest for one segment was Republican Senator George Allen, son of the Redskin and Ram coach of the same name. Maher, citing the Alex Rodriguez trade, posed a question—well, I guess it was more of an analogy. He said that he always viewed the Republican party like baseball: the rich get richer and the rest are left to their own devices. Allen responded that they were closer to the more egalitarian NFL: everyone gets a chance. (Or as we used to say in school, "So-and-so is like a doorknob. Everyone gets a turn." But maybe we intended something else there.) Allen then nauseated the audience with a stream of obviously pre-canned football analogies about how great the Republicans are, the kind that must go over big on Rush Limbaugh's show, right before the segment denigrating a new African-American quarterback each week (I know, that was on ESPN, not on Limbaugh's show).

Anyway, the idea that the NFL is a model of parody—oops, typo—parity and that baseball is ruled by the Yankees and a handful of others running roughshod over the rest of the teams is so ingrained in the American collective consciousness that even politically incorrect shows accept it without question. This is the NFL that has a perennial underclass of teams in Arizona, Cincinnati, Detroit, San Diego, etc. Meanwhile in baseball, the Yankees haven't won a World Series since 2000, and "small market" teams like the Marlins and Angels have won the last two seasons. But baseball, mostly through its own doing, gets no credit. Running down the competitive balance in the game was a personal hobby for Selig for the two seasons prior to the negotiations for the last Collective Bargaining Agreement in 2002. It seems that he did such a good job that his alleged efforts to promote the game ever since the CBA was signed still are not making a dent in the din. The general zeitgeist seems to hold that baseball is not played on an even field, so to speak.

I happen to disagree with this position. I started a stillborn study on the issue last year and would like to revive it and hopefully finish it before the season starts. Here's where I left off:

The financial results of the past season prove that salaries must come down. We believe that players in insisting upon exorbitant prices are injuring their own interests by forcing out of existence clubs which cannot be run and pay large salaries except at large personal loss. The season financially has been a little better than that of [the previous year], but the expenses of many of the clubs have far exceeded their receipts, attributable wholly to high salaries. In view of these facts, measures have been taken by this League to remedy the evil to some extent for [next season].
—NL President William A. Hulbert, September 29, 1879, announcing the adoption of what he called the "uniform player contract" but which became known as the "reserve clause" after a league meeting in Buffalo.

Today baseball woke up and recognized there was an 800-pound gorilla sitting in our living room — the lack of competitive balance in the game. Let's cure some of the problems. Enough is enough. Baseball has been for too long a big, old oil transport ship that takes forever to turn. Bud should take the rudder and turn ASAP.
— Larry "Luke" Lucchino (Get it? He fights the Evil Empire), then of the San Diego Padres (he's since been promoted to Red Sox), January 20, 2000 By every measure, baseball is in the midst of a great renaissance. Never has the game been more popular. We set a new attendance record in 2000, drawing nearly 73 million fans to our ballparks. More fans attended Major League Baseball games than attended the games of the other three major professional team sports combined. When you add the 35 million fans drawn by minor league baseball, the aggregate number of fans that attended professional baseball is nearly 110 million. In the so-called halcyon days of New York baseball in 1949, the three New York teams—the Yankees, the Dodgers, and the Giants—drew a combined 5,113,000. Last season, the Yankees and Mets drew 6 million. The only set of circumstances—and I have often said this, Senator —that can impede this great renaissance is our inability to solve the problem of competitive imbalance. During the past decade, baseball has experienced a terribly disturbing trend. To put it simply, an increasing number of our clubs have become unable to successfully compete for their respective division championships, thereby making post-season appearances, let alone post-season success, an impossibility. The enduring success of our game rests on the hope and faith— key words here, ‘‘hope and faith’’—of each fan that his or her team will be competitive. At the start of spring training, there no longer exists hope and faith for the fans of more than half of our 30 clubs, and we must restore that hope and faith. The trend toward competitive imbalance which is caused by baseball’s economic structure began in the early 1990’s and has consistently gained momentum. Indeed, as I testified in 1994 before members of the U.S. House of Representatives, baseball’s economic problems have become so serious that in many of our cities the competitive hope that is the very essence of our game is being eroded. Unfortunately, baseball’s economic problems have only worsened since 1994, and for millions of our fans the flicker of competitive hope continues to become more faint. The competitive imbalance problem is one that, if not remedied, could have a substantial effect on the continuing vitality of our game. — Baseball Commissioner "Concealment, like a worm i' the" Bud Selig (from Twelfth Night) at the Senate hearings on competitive balance (i.e., Hearing Before The Subcommittee On Antitrust, Business Rights, And Competition Of The Committee On The Judiciary Competitive balance has been a concern in baseball almost since the beginning of the sport as an organized concern. In two years leading up to last year's labor dispute and the resulting collective bargaining agreement, Bud Selig and the owners painted a dreary picture of baseball's immediate future. Whether this was just a negotiating tactic or an airing of the sport's laundry in public, it did seem to correspond to the year leading up to the first deadline for the old CBA. And when the deadline was extended in the wake of the September 11th tragedy, the rhetoric seemed also to be extended. Talk of competitive balance since the new CBA is of how the Angels rode the crest of positivity that the improved competitive balance engendered. (You might guess my stance on the issue from these observations.) I am belatedly starting a series on competitive balance to examine how balanced the game has been historically and how balanced it remains today. 
The first section of the competitive balance series will review what has been written up until now on the topic. Of course, it will be in my own idiomatically irreverent style. I will review the findings of baseball's independent Blue Ribbon Panel of MLB insiders, paleontologist Stephen Jay Gould's explanation for the death of the .400 hitter, Bill James' 1990 analysis of competitive balance, and economist Andrew Zimbalist's take on the state of the game, as well as a few papers that I have found online from non-professionals.

The Report of the Independent Members of the Commissioner's Blue Ribbon Panel on Baseball Economics (July 2000) and Its Updated Supplement (December 2001)

"Professional baseball is on the wane. Salaries must come down or the interest of the public must be increased in some way. If one or the other does not happen, bankruptcy stares every team in the face."
—Chicago White Stockings nee Cubs owner A.G. (as in America's Game and "almost god") Spalding, 1881.

"Unless something happens, we're all going to be out of business. When you have as many teams as there are losing money, something has got to give."
—Cleveland Indians chairman Patrick J. "Don't Call Me Shaquille" O'Neill, 1985.

"The one thing we know today is we can't continue to do business the way we have in the past."
—"Hey Bud, let's party" Selig, 1992.

First, let me say that this is the finest work (or works) of fiction on the list. And the fiction starts on line one, page one:

The Commissioner's Blue Ribbon Panel on Baseball Economics, representing the interests of baseball fans, was formed to study whether revenue disparities among clubs are seriously damaging competitive balance, and, if so, to recommend structural reforms to ameliorate the problem.

If you believe that, then you must believe that the commissioner acts solely in the best interests of baseball and that George W. (the other one) was looking out for Joe Lunchpail when he instituted his tax rebate and the doublespeakiest of all, the Patriot Act. There are sixteen men on the panel (listed on p. 54), twelve of whom work for or own a major-league team. The four independent members who are the sole members listed on the front of the report (Richard C. Levin, George J. Mitchell, Paul A. Volcker, and George F. Will) are not without their ties to the sport as well. Doug Pappas has a great review of these documents at his site, in which he points out:

The four "independent" members are Yale president Richard C. Levin, who drafted the owners' 1989 salary cap proposal; former Federal Reserve chairman Paul Volcker, who represented the owners on the last blue-ribbon economic panel, in 1992; former Senator George Mitchell, often mentioned as a possible Commissioner; and columnist George Will, who in a remarkable conflict of interest serves on the boards of both the Orioles and the Padres.

I'll have to defer to Kramer regarding Mr. Will: Kramer found him "attractive," but "I don't find him all that bright." I'm sorry to bring down the collective IQ of those reading this study. Anyway, the period covered by the study corresponds to the period since the signing of the previous CBA (or at least the peace presaging it), 1995-1999. It was released in time to become the basis for the war over the latest CBA (which was to be signed at the end of the 2001 season but, because of the September 11 tragedy, was postponed for one year). So clearly it was envisioned as propaganda.
Granted, some of the recommendations of the panel (more revenue sharing and no salary cap) were less extreme than the owners' original bargaining position and even the final position agreed upon in the CBA. And maybe these men who have an incestuous relationship with baseball can still view the facts objectively. Their mandate is as follows, after all:

[T]he Independent Members were charged with studying the economic condition of the game and producing a report addressing the relationship between MLB's current economic structure and competitive balance, and the ramifications of the current economic system for the future growth, health, stability and competitive balance of Major League Baseball.

Well, there's one problem with that theory: the facts they use for baseball's finances are based solely on what the clubs reported. Thus, only three teams claim to have made money over the period covered (1995-1999): the Yankees, Rockies, and Indians. The Braves allegedly had the fourth-highest revenue in baseball over that period and still lost about a million dollars annually. One has to wonder how the Braves' deal with TBS is factored into the revenues and expenses. The Dodgers are alleged to have lost over $15 M a year from 1995 to 1999, and yet Rupert Murdoch bought the team for $350 M in the middle of all these losses. And, as Pappas mentions, Financial World and Forbes estimated that baseball made $400 M over the same period.

Also, there's the skeleton in baseball's closet that the panel does not discuss: the missing money. The report states: "Measured simply in terms of gross revenues, which almost doubled during the five complete seasons (1995-1999) since 1994, MLB is prospering. But that simple measure is a highly inadequate gauge of MLB's economic health." That may be, but the panel never says why. They do go on about how the revenue disparity among the various quartiles in baseball grew over the period even though the revenue for the teams at the lowest rung of the ladder still grew, just more slowly, as if that were a shocking finding. They never discuss why total revenues for baseball went from $1.4 B to $2.8 B (an almost $1.4 B increase), driven by local revenue increases, while total payroll went from $0.9 B to $1.5 B (an almost $0.6 B increase). Local Revenue is defined as "gate receipts, television, radio and cable fees, ballpark concessions, advertising and publications, parking, suite rentals, postseason, spring training and other baseball revenues." (p. 59) That leaves about a $0.8 B difference. That's more than half the revenue at the start of the period, and that's from the extremely conservative numbers that the clubs reported. Where did it go? Why isn't it even discussed in the report? Did it go only to the top quartile? They don't say. They go on about sharing revenue and drafting prospects off the 40-man rosters of better teams, but there's almost a billion dollars that disappeared into the ether, enough to cure all of baseball's woes, and they don't even mention it.

They do mention that "club debt nearly quadrupled over seven years, from $604 million in 1993 to $2.08 billion in 1999" (p. 12), and yet we're expected to swallow that fabrication out of whole cloth without further elaboration. This is an industry in which sweetheart stadium deals are handed to club owners while the one real expense, player payroll, is being outpaced by revenues, and yet debt quadrupled.
Didn't this raise any red flags for the panel that the fiscal numbers were more Monopoly than Arthur Andersen (or maybe it's the reverse since Enron)? I forgot that they were so far in bed with the clubs that they would even enjoy spooning with Bud Selig ("Those aren't pillows!"—Planes, Trains, and Automobiles). I can't speak to the revenue numbers because they are a) not really available, b) too labyrinthine, and c) all bloody bollix anyway (sorry, I was just watching Angel and felt compelled to channel Spike). For many teams baseball is inextricably entwined with the owners' other businesses, as evidenced by underpriced cable deals and the like. In a later section of the study, I will speak to the payroll numbers, which, as Pappas also points out, are erroneously based on rosters (25- and 40-man) as of September 1 of the given year. This obviously inflates the payroll numbers for contending teams, who regularly add high-priced players for the stretch run even though they oftentimes pay no more than a prorated share of the mid-season acquirees' salaries.

It's not as if all of the suggestions made by the panel are without merit. Revenue sharing is clearly the most logical, if unfortunately not the most straightforward, way for baseball to resolve its disparities. However, even though this august, owner-friendly panel recommends it, the owners are still too distrustful of each other to enact it. The panel also comes down against contraction: "If the recommendations outlined in this report are implemented, there should be no immediate need for contraction." (p. 44; italics theirs) And they are for relocation: "Franchise relocation should be an available tool to address the competitive issues facing the game. Clubs that have little likelihood of securing a new ballpark or undertaking other revenue enhancing activities should have the option to relocate if better markets can be identified." (p. 43; italics theirs) That runs counter to the de facto position of baseball, which has had an unstated 30-year moratorium on clubs relocating. Then again, baseball is all for the threat of relocating, to paraphrase Ian Faith, to prize new stadiums and other concessions out of the locals.

However, I am not a big fan of a number of their findings. Their overall conclusions (p. 1) are as follows:

a. Large and growing revenue disparities exist and are causing problems of chronic competitive imbalance. (Italics theirs)

b. These problems have become substantially worse during the five complete seasons since the strike-shortened season of 1994, and seem likely to remain severe unless Major League Baseball ("MLB") undertakes remedial actions proportional to the problem. (Italics theirs)

c. The limited revenue sharing and payroll tax that were approved as part of MLB's 1996 Collective Bargaining Agreement with the Major League Baseball Players Association ("MLBPA") have produced neither the intended moderating of payroll disparities nor improved competitive balance. Some low-revenue clubs, believing the amount of their proceeds from revenue sharing insufficient to enable them to become competitive, used those proceeds to become modestly profitable. (Italics theirs)

d. In a majority of MLB markets, the cost to clubs of trying to be competitive is causing escalation of ticket and concession prices, jeopardizing MLB's traditional position as the affordable family spectator sport. (Italics theirs)

Wow. I'll agree that "small market" owners are lining their pockets with money that they could be re-investing into their teams.
And I'll agree that the revenue sharing and luxury taxes of the old CBA did not accomplish a heck of a lot. I'll even agree that "[l]arge and growing revenue disparities exist." However, I have a problem with declaring the problems of competitive balance "chronic" based on these finances. It may be true, but I don't think you can base it on the shoddy factual information here. I also have a problem with their prediction that the alleged problems are "likely to remain severe" unless baseball intervenes. First, this was a great bargaining chip for the CBA negotiations. And second, the problems, if there are any, may just be cyclical. I mean, we are talking about an era in which three teams dominated pretty thoroughly: the Yankees, the Braves, and very briefly the Indians. Sure, a few others snuck in. The Marlins even won a World Series, but those three teams accounted for eight of the ten World Series teams during the period. They also allegedly constitute two of the three money-making teams in baseball (and the Braves only lost money due to creative accounting). So maybe that confirms the claims of revenue disparity? Or maybe, given the austere measures in Cleveland and Atlanta in the last couple of years, those teams just happened to have put together short-lived dynasties that, while they lasted, resulted in greater revenues. That, of course, still leaves the high-spending, high-revenue Yankees unaccounted for. But if it's one club that is the problem, how does that constitute a chronic problem? Besides, couldn't that problem be cured by the creation (or rather relocation) of the New Jersey Expos?

The last point, that ticket prices and concessions are driven by the costs of remaining competitive, has been proven false by a Doug Pappas study that I'll go into later. Even without that study, I'll say that this assertion is patently ridiculous and defies over a hundred years of baseball history as well as basic economics. Teams set ticket prices to maximize profits. If they could charge $1000 and sell out every night, they would. However, that is unlikely. So what they do is try to maximize the money they take in. If that means fewer people attend but prices are higher, so be it. They know that higher prices will result in fewer fans in the stands. They estimate how many fewer and determine what price will maximize the final gate.

There are some other assertions and findings in the study that cause me agita as well. The words "strong correlation" are used a bit:

[T]here is a strong correlation between high payrolls and success on the field…There also has been a stronger correlation between club revenues/payrolls and on-field competitiveness in the years since the issue of competitive balance was studied by the Joint Economic Study Committee which issued its report in 1992. (p. 12)

If I remember my statistics correctly, a "strong correlation" implies an actual measurement: a correlation coefficient near 1, or at least a fitted relationship with a stated confidence level, not just an eyeballed trend in the data. I see no indication that any such evaluation was performed and would be surprised if one even was. We will do an analysis of our own in a future section of the study; a rough sketch of what I have in mind follows.
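For what it's worth, here's a minimal sketch of that kind of check in Python: compute the correlation between payroll and winning percentage and report how much of the variation it actually explains. The team figures below are invented placeholders, not actual numbers from the report.

```python
# Minimal sketch: how strong is the payroll/wins relationship, really?
# The team figures below are invented placeholders, not actual 1995-99 data.
from math import sqrt

teams = [
    # (payroll in $M, winning percentage): hypothetical values
    (92.0, 0.605), (73.5, 0.559), (55.1, 0.500),
    (48.9, 0.531), (31.6, 0.426), (24.8, 0.398),
]

def pearson_r(pairs):
    """Plain Pearson correlation coefficient between the two columns."""
    n = len(pairs)
    xs = [p for p, _ in pairs]
    ys = [w for _, w in pairs]
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(teams)
print(f"payroll vs. winning percentage: r = {r:.2f}, r^2 = {r*r:.2f}")
```

An r-squared figure at least tells you how much of the variation in winning percentage payroll explains, which is more than the report ever offers.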
Then there's the other-sport envy, a mania that Bud and the other baseball leaders have promulgated for some time and made part of our collective sports culture:

Baseball operates under an anachronistic economic model, unlike the NFL and NBA…The NFL and NBA have thrived with structures that allow franchises in widely different kinds of markets (including small media markets such as Green Bay and San Antonio) to succeed. (p. 6)

An indicator of such balance would be a ratio of approximately 2:1 between the average payroll of the payroll Quartile I clubs to the average payroll of the payroll Quartile IV clubs…In recent years the NFL, which enjoys substantial competitive balance, has had a ratio of the average of the highest seven payroll teams to the average of the lowest seven of less than 1.5:1. The comparable figure for the NBA during the last three years has been less than 1.75:1. MLB's current ratio, using either 25-man roster payrolls or 40-man roster luxury tax payrolls, is in excess of 3.5:1. (p. 7)

Well, a weak union and a salary cap will do that. Why is 2:1 ideal? And why is having a low payroll in and of itself a bad thing? Talented young teams sometimes have low salaries. Look at what the Indians did in the early Nineties. Look at the bloated salaries of the Mets and Rangers over the last few years, or the Orioles a few years ago, and then consider the level of success on the field those teams achieved. Besides, this statement seems to be contradicted by this snippet in the supplement:

In 2001, the ratio was closer to 3:1. In 1999, the last season examined by the Blue Ribbon Panel, the actual gap in average payroll between payroll Quartile I clubs ($78.8 million) and payroll Quartile IV clubs ($20.2 million) was $58.6 million. By 2001, the actual gap had grown to $64.4 million.

The ratios were getting closer but the disparity was greater. So which is it, ratios or the disparity? Besides, successful teams will probably have higher payrolls if you look just at the team in the successful season. The Marlins had a high payroll in their one successful year in the period, 1997, and then had smaller payrolls and much less success in other years. If you look at the Indians in the Nineties, you'll see that they had success with a relatively small payroll and then with a larger payroll. Well, I'm getting ahead of myself: this will be looked into in a later section.

One thing that this study does provide us is a definition of, or rather a measuring stick for, competitive balance: "Proper competitive balance will not exist until every well-run club has a regularly recurring reasonable hope of reaching postseason play." (Italics theirs; p. 5) "Well-run" is hard to define, as is the overly alliterative "regularly recurring reasonable hope." I'll attempt to define the latter later on in the study and use it to measure competitive balance.

"The Numbers" series by Doug Pappas

"You go through The Sporting News for the last 100 years, and you will find two things are always true. You never have enough pitching, and nobody ever made money."
—Donald "We've been kicking ass for 35 years; We're ten-and-one" Fehr

"Anyone who quotes profits of a baseball club is missing the point. Under generally accepted accounting principles, I can turn a $4 million profit into a $2 million loss and I could get every national accounting firm to agree with me."
—Paul "Double X" Beeston, as a vice president with the Blue Jays and later baseball's chief operating officer until similar open-mindedness got him canned.

Pappas put out an eight-part series based on the financial disclosures made by MLB for the 2001 season. These disclosures were meant to buttress Bud Selig's testimony before Congress over contraction.
Remember the famous gagging of Donald Fehr over the players union's knowledge of the real financial statements and Rep. John Conyers' "We don't have the numbers, we don't have the numbers" protests that fell on deaf ears? Conyers also soon after asked Selig to resign following the disclosure of a loan he received from Carl Pohlad in violation of baseball's rules. When Bud demurred, Conyers then suggested that baseball a) table the contraction issue for a year and b) remove the gag from the MLBPA's collective mouth over the fiscal numbers because of concerns "that baseball's losses may have been overstated" and "that financial material exists which has not been turned over to the Committee." Bud's response was a tutorial in how to assert, "Of course," while really saying, "No frigging way!" Bud's own doublespeak: Buddlespeak.

Anyway, back to Pappas. He poses: "MLB somehow managed to lose $519 million in 2001 despite record revenues of more than $3.5 billion. This claim was met with derision by virtually all independent observers… Are the books cooked? If so, how?" Pappas then proceeds to examine the financial disclosures item by item: gate receipts, local media revenues, postseason revenue, local operating revenue, player compensation expenses, national and local expenses, and interest expenses. His only issues with the revenues are the low cable revenues for the two superstation teams, the Braves and the Cubs, as well as the Phils and Red Sox, who control their cable outlets (and possibly the Tigers hiding some of their new-stadium money). However, Pappas really gets rolling when he starts to delve into the expenses. First, payroll:

Player salaries are investments. A team that spends its money wisely wins more games, and in any market, a winning team means higher attendance and more public interest[,] which ultimately translates to larger media contracts and more money for the owner... A team which spends poorly, like the Orioles or Devil Rays, has the worst of both worlds: higher expenses without higher revenues. (Italics his)

He also points out that two of the top three payroll teams missed the playoffs and that the A's were 26th in payroll but had the second-best record in the majors. This flies in the face of the Blue Ribbon Panel's 1995-99 analysis. Pappas then introduces a formula to compare team payrolls, called marginal salary per marginal win. He feels this is an improvement over facilely dividing payroll by wins. Given that there is a minimum player salary that must be met, "it's impossible to spend $0 on a team." So Pappas divines that a team made up of league-minimum players would win 30% of its games (49-113), though he doesn't explain where that figure comes from. His final formula is then:

Marginal salary per marginal win = (Adjusted player compensation - $13,000,000) / ((Winning percentage - .300) x 162)

With this method he finds that the Twins, who were last in payroll, got the most wins for the least money ($480 K in marginal salary per marginal win). They were followed by the A's (26th in payroll but only $526 K per marginal win), M's (9th and $941 K), and Phils (23rd and $972 K). Meanwhile the Orioles, who were 12th in payroll, paid the most per marginal win ($4.53 M), followed by the D-Rays (19th in payroll, $3.27 M per marginal win) and Rangers (8th and $3.26 M). Rounding out the four worst values were the Red Sox, whose payroll was number one, barely ahead of the soon-to-be-dubbed "Evil Empire" Yankees, and whose marginal salary per marginal win was $3.12 M.
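Pappas's formula is straightforward to put into code. Here's a minimal sketch; the team in the example is hypothetical, not one of the actual 2001 figures.

```python
# Sketch of Doug Pappas's marginal salary per marginal win.
# Per Pappas, a roster of league-minimum players is assumed to cost
# about $13M and win about 30% of its games (a .300 winning percentage).

MINIMUM_PAYROLL = 13_000_000
BASELINE_WPCT = 0.300
GAMES = 162

def marginal_salary_per_marginal_win(adjusted_compensation: float,
                                     winning_pct: float) -> float:
    """Dollars spent above the minimum for each win above the .300 baseline."""
    marginal_salary = adjusted_compensation - MINIMUM_PAYROLL
    marginal_wins = (winning_pct - BASELINE_WPCT) * GAMES
    return marginal_salary / marginal_wins

# Hypothetical team: $45M in adjusted compensation, a .550 winning percentage.
cost = marginal_salary_per_marginal_win(45_000_000, 0.550)
print(f"${cost:,.0f} per marginal win")  # roughly $790,000
```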
Those results clearly show that payroll does not translate directly into on-field success, at least not in 2001. They are much more in line with Pappas's payroll-as-investment theory.

Finally, Pappas tackles what baseball terms "National and Other Local Expenses" but what Pappas dubs a "black hole." It's basically every other expense besides team payroll and interest. He finds that these expenses vary greatly and that not all of the variation can be explained by more funds being invested in the farm system and other valid expenses. Clubs seem to use this category for hiding accounting tomfoolery (e.g., the Cubs charge themselves three times the norm for advertising on their own TV channel and in their own newspapers). He then uses the A's as an example of a streamlined front office:

[T]he average club spends almost 50% more than the Athletics to achieve far less. If every club were to reduce its non-stadium-related overhead to Oakland's level, MLB would save more than $500 million. That they haven't is strong evidence that MLB is exaggerating its financial difficulties.

Pappas is not enamored of baseball's revenue sharing plan for two reasons. First, it doesn't require recipients to try to compete: owners can simply pocket the money, treating it as a no-obligation subsidy. The second problem results from a definitional ambiguity: "small-market team" can mean either "low-revenue team" or "team that plays in a small metropolitan area." The result is that "MLB's revenue sharing formula shortchanges popular, well-run teams in smaller cities while rewarding incompetently managed big-market clubs" and that the welfare-recipient Milwaukee Brewers end up being the most profitable team in baseball. I'm sure Bud Selig doesn't mind that a bit, no matter how little he may have had to do with that happy news. Pappas's suggestion for fixing revenue sharing is to factor in market size. He also advises, "MLB needs to realize that badly run teams should lose money. Very badly run teams should lose even more."

Finally, the interest payments that comprise about a third of baseball's losses in 2001 are, in Pappas's view, a rococo of half-truths and outright lies. For example, he explains that if the Twins were sold for $150 M, one half would be considered the price of the franchise for tax purposes and the other half the acquisition cost of the players already under contract, which would be a tax write-off. Therefore, what is considered a loss actually benefits the owner. (In this example, a $175 M loss is a $60 M gain.)

Pappas promises to explain how "the owners' [then] current labor proposal…could actually reduce the competitive balance the owners claim to be protecting" in the last entry, but inexplicably never touches the subject (to quote Fred Willard in A Mighty Wind, "Wha' happened?"). Instead he compares baseball's numbers against a Forbes magazine report. Forbes estimates that baseball's claimed $232 M in losses were actually a $76.7 M profit. The report was of course lambasted by Selig and the MLB reps as "pure fiction." I had to include this part because it was too good to pass up:

In the real world, where Selig wields his precious Blue Ribbon Panel report as a club to demand givebacks from the players and new stadia from the taxpayers of Minnesota, Kansas City, Miami, and Oakland (to list just the clubs threatened during Bud's 2002 Extortion Across America tour), any writer meeting the Commissioner's standards of "good journalism" should be fired.
Unless and until MLB allows an independent outside auditor to review all its financial records, and to disclose the results publicly in a report whose contents are not subject to MLB's prior review and control, its self-serving statements should be afforded no more deference than those of any other special-interest pleader.

I think Pappas's study counters whatever believability the Blue Ribbon Panel may have had left. But it leaves us back at sea. Next we will look at how paleontologist Stephen Jay Gould's investigation of the death of the .400 hitter bears on competitive balance.

"The Model Batter: Extinction of 0.400 Hitting and the Improvement of Baseball" from Full House: The Spread of Excellence from Plato to Darwin by Stephen Jay Gould (1996)

[T]he game hasn't changed...I see the same type pitchers, the same type hitters...I am a little more convinced than ever that there aren't as many good hitters in the game… They talked for years about the ball being dead. The ball isn't dead, the hitters are, from the neck up.
—Teddy Ballgame in The Science of Hitting (pp. 11-12, 1972 ed.), on his experience as a manager (a quote that has taken on added nuance thanks to Williams' son). This quote faces a graph titled "Decline of the Hitter: From 1930 to the Present," which of course shows a steady decline in runs scored, homers per game, and batting average accompanied by a sharp increase in shutouts until 1968. However, the graph continues until 1970 and these trends all reverse, so to negate their impact, those two years are designated expansion years. (This is also disingenuous since there were no new teams in 1970, while the expansion years of 1961, 62, and 65 are not so designated.)

Two things have pretty much taken care of the .400 prospect. One thing is called the slider…[the] second reason is the improvement of the bullpen.
—Stan Musial, who never managed the Senators

Skip: You guys. You lollygag the ball around the infield. You lollygag your way down to first. You lollygag in and out of the dugout. You know what that makes you? Larry!
Larry: Lollygaggers!
Skip: Lollygaggers. (shaking head in shame)
—"Bull Durham"

The .400 hitter is like Jacob Marley in A Christmas Carol: both are long dead, but their spirit casts a pall over the present. Even though baseball just experienced arguably its offensive apogee in the last decade, one still hears throwback journalists decrying the quality of today's ballplayer. And without exception the rallying cry of the things-were-better-in-my-dayists is the death of the .400 hitter, the number that revisionist history has made into the watershed mark of the game. This subject may seem far afield of our topic of competitive balance. However, remember that implicit in the .400-hitter post mortem are all the issues that pertain to competitive balance: the quality of play, the distribution of players, and the general advancement in training, equipment, strategy, etc. After all, as Gould poses, "Something terrific, the apogee of batting performance, was once reasonably common and has now disappeared…The best is gone, and therefore something has gotten worse. [However, I] claim that [the] extinction of 0.400 [sic] hitting really measures the general improvement of play in professional baseball" (p. 79). (The former position is of course a favorite of old ballplayers.)

There are two conventional explanations for the demise of the .400 hitter:

1) What Gould dubs the Genesis Myth, from "There were giants in the earth in those days" (Genesis 6:4).
"In the good old days, when men were men…players were tough and fully concentrated…How could any modern player, with his high salary and interminable distractions, possibly match this lost devotion?" (pp. 80-81) 2) The second is the "tougher conditions" theory, "the claim that changes in play have made batting more difficult (the Genesis Myth, on the contrary, holds that the game is the same, but that the batters have gotten soft)…The three institutions of baseball that might challenge good hitting [are]…better pitching, better fielding, and better managing." (p. 82) They’re two sides of the same coin. Either players got softer or the game got harder, but either way, the offensive game is the worse for it. Right? Well, Gould disagrees: "The extinction of the 0.400 hitter measures general improvement in play (p. 81)…Isn't it more reasonable to assume batting has improved in concert with other factors in baseball? (p. 88)…If pitching and fielding have slowly won an upper hand over hitting, we should be able to measure thus effect as a general decline in batting averages (p.98)…[but] in fact, the mean batting average for everyday players has been rock-stable [with exceptions] throughout our century. (p.99) (Actually, a graph that Gould includes of the mean batting averages 1876-1980, p. 103, shows the fallacy of Williams' chart of the same for the period 1930-70, that I mentioned earlier, since it demonstrates fully that a small sample can lead one to certain conclusions—that batting was in decline—that evaporate when one pulls back and views the whole panorama of data.) Gould points to larger player pools, better training, increases in player size, and increases in records in other sports as a plausibility argument for a general improvement in the sport. He also develops a way of looking at peak performance as the "right wall" or physical limit that manifests itself in the data accompanying empirical measurements in sport (e.g., the batting average of a league leader or winning Boston Marathon times). Gould offers that our perception of what a .400 means is what is in error "by treating '0.400 hitting' as a discrete and definable 'thing,' as an entity whose disappearance requires a special explanation…In fact, our propensity for recognizing such a category at all only arises as a psychological outcome of our quirky propensity for dividing smooth continua at numbers that sound 'even' or 'euphonious'"...When we view 0.400 hitting properly as the right tail of a bell curve…then an entirely new…explanation becomes possible for the first time. (p. 100) Gould's explanation is that while mean batting averages remained steady but the averages became more clustered around the average and the variation for the extremes shrank. He started with a simple study of the top and bottom five averages and then went on to study the standard deviation based on all "regular" players. I do have a problem with this approach in general because as one adds more players playing more years naturally the standard deviation shrinks. However, I do think that Gould latched on to a true trend even though his statistical proof may overstate that trend. The trend is due to a few factors that Gould delineates. First, complex systems such as baseball improve over time and variance decreases. "In baseball's youth, styles of play had not become sufficiently regular and optimized to foil the accomplishments of the very best." (p. 113) He points to specialization and division of labor (though the trend he graphs on p. 
The trend is due to a few factors that Gould delineates. First, complex systems such as baseball improve over time and variance decreases. "In baseball's youth, styles of play had not become sufficiently regular and optimized to foil the accomplishments of the very best." (p. 113) He points to specialization and the division of labor (though the trend he graphs on p. 115, the decline in baseball players who fielded more than one position in a given year, may have reversed itself in recent years due mostly to the overspecialization of modern pitching staffs—we have even seen the return of the pitcher/position player at the major league level). He also points to the steady decline in the standard deviation of team winning percentage since 1900. (Again, as more teams play more games, I think this trend is overestimated, more so here than with batting averages.)

The second factor is that "as play improves and bell curves march toward the right wall, variation must shrink at the right wall… [A] 'right wall' must exist for human achievement. We cannot, after all, perform beyond the limits of what human bone and muscle can accomplish." (p. 116) I must point out that a .400 batting average is not like running a marathon at the speed of light. After all, a batting average is just based on probability. A .400 average occurs in a short series like a playoff from time to time. And look at batting averages at the end of April every year. Small samples can do that. However, there is no physical limitation that prevents someone from batting .400. I can see the argument that as schedules went from 136 to 148 to 154 to 162 games, with various stops on the way, the probabilities along with the other factors mentioned finally caught up with the .400 hitter. But there are no more physical limitations today than there were in the day of Wee Willie Keeler. The only true physical wall is batting 1.000, which is impossible to exceed. That said, the era-defined right wall does exist, though it is defined not by the physical limitations of the players (an assertion that actually seems counter to everything else in Gould's study) but rather by the probability-defining environment in which they play. I also agree that improvement in play has brought the average player closer to the theoretical right wall and decreased variation.

To be continued….
“Hall’s of Relief”—Final Analysis (Really), Pt. IV
2004-02-20 01:34
Previous entries: The 1870s, '80s, and '90s; The 1900s and '10s; The 1920s, '30s, and '40s; The 1950s; The 1960s; The 1970s; The 1980s; The 1990s and 2000s; 2003 Notes: Part I & II; Final Analysis: I, II, III, and IV.

Pens Envy: Baseball's Best Bullpens

Now let's use our new toy, Relief Wins, to determine the best (or at least most valuable) bullpen. Here are the teams with 10+ RWins:
You'll note that the vast majority of these teams are from the last ten years. That's because use of middle relievers over that time has skyrocketed. More innings mean more value. The 2003 Dodgers (Gagne, Quantrill, Mota, Shuey, and Martin) come out on top. I still prefer the 1990 A's pen. I had compared them to the "Nasty Boys", the 1990 Reds pen, in the Nineties section and came down in favor of the A's. This method supports that opinion as the Reds rank 59th at 8.62 RWins. Now here are the worst pens:
That's all well and good, but are the bullpens of today overrated because of the extra innings? What if we divide the Relief Wins by innings pitched? Let's see (min. 50 IP):
So the 2003 Dodgers still end up on top. The 1990 A's move up to second, and a Connie Mack team moves into the top 10. Now the worst:
Again pens have improved over time as teams devote better talent to the role. This is a good argument against the accepted position that pitching has become too diluted in baseball today, but that's the subject of another study. Finally, here are the best and worst bullpens by decade:
Here is a statistical breakdown for the best pens: 2003 Los Angeles Dodgers:
1990 Oakland Athletics:
1995 Cleveland Indians:
2002 Atlanta Braves:
1979 Texas Rangers:
1979 Baltimore Orioles:
2002 Anaheim Angels:
1995 St. Louis Cardinals:
1998 Colorado Rockies:
1926 Philadelphia Athletics:
2003 Anaheim Angels:
1982 Boston Red Sox:
1997 New York Yankees:
1981 New York Yankees:
2001 Anaheim Angels:
1999 Cincinnati Reds:
Now the worst: 1899 Cleveland Spiders (Surprise):
Pen Notes, Future Studies, And Other Bull

2003 NL Cy Young Revisited: Let's take one last look at the past season's Cy Young candidates based on the wins above average calculations from this section. Here are the candidates with at least 5 RWins or SWins:
Now you can view this either as a validation of Gagne's Cy Young worthiness or as an indictment of the system I've laid out. As for me, I might be ready to say, "Uncle," and acknowledge that Gagne was the best candidate. Almost.

Recommendations: I've already made these recommendations within this tome, but I just wanted to collect them at the end:

1) Change the Save Rule: One thing is clear to me after doing this study: the save is probably the most meaningless stat that can be dreamt up. Bobby Thigpen's record-setting 1990 season (57 saves) was nowhere near the top reliever seasons (126th with 4.33 RWins as a Robb Nen-type reliever). One could use that as an argument against this study, but I think most learned fans will agree that Thigpen's 1990 season, aside from the save record, wasn't all that spectacular. It was a very good season to be sure, but not among the best for a reliever. Randy Myers' 38-save 1992 season results in negative Relief Wins (-1.35). A 4.23 ERA (17% worse than the league average) will do that. Myers' 38-save, 3.88-ERA 1995 season wasn't much better (0.24 RWins for an adjusted ERA 5% better than the league average). Jeff Reardon's career save totals are not taken that seriously because of seasons like his 35-save 1986 campaign: -0.57 RWins on a 3.94 ERA, 6% worse than the park-adjusted league average. You get the point. An interesting study would be to determine whether saves correlate with any tool like RWins that measures relief pitcher effectiveness. I didn't run it here because I think I know what the results would be. It's clear to me that saves are the result of pitcher usage more than performance.

The rule can be improved to measure results better. First, either get rid of or segment off the three-inning automatic save. It does measure endurance but does not necessarily capture the same idea as the rest of the save rule. Next, get rid of the one-inning, three-run-lead save. I propose that the save rule be changed so that, to earn a save, the pitcher either enters the game at the start of a half-inning with a one-run lead or enters mid-inning with the tying run either on base or at bat. That's it. It's a rule informed by Bill James' research. And the historical stats should be changed retroactively. There is a precedent: the rule was tweaked before the 1973, '74, and '75 seasons. When the rule was established in 1969, a reliever was credited with a save if his team maintained a lead while he was pitching. In 1973, the rule was changed so that the pitcher had to "protect" the lead, i.e., it could not change hands while he was in the game. In 1974, the three-inning rule was put into effect. Also, for the first time a pitcher would receive a save only if the score was "close" when he entered; in '74 that meant that the tying run was either on base or at the plate in mid-inning appearances. (I have to dig through my pre-'74 Encyclopedias to determine when the three-run-lead provision, rule 10.20.3a, came into effect.) In 1975 the rule was enlarged to include the tying run on deck for mid-inning appearances. Anyway, the rule is an anachronism and will become more so as teams average four or five pitchers per game. We now have 30+ years of real historical data with the save rule. It's clearly a failure. Baseball should fix it now to revitalize the stat. The only argument I see against this position is Bobby Thigpen's hurt feelings.
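To make the proposed criterion concrete, here's a minimal sketch of the entry condition in Python. One assumption on my part: the pitcher would still have to finish the game and preserve the lead, as under the existing rule; the proposal above only narrows when a save opportunity exists.

```python
# Sketch of the proposed save criterion: the pitcher must either
#   a) enter at the start of a half-inning with a one-run lead, or
#   b) enter mid-inning with the tying run on base or at bat.
# Assumed (not stated above): he still has to finish the game and
# preserve the lead, as under the existing rule.

def qualifies_for_proposed_save(lead: int,
                                start_of_half_inning: bool,
                                runners_on: int) -> bool:
    """Return True if entering in this situation is a save opportunity."""
    if lead <= 0:
        return False                      # no lead, no save chance
    if start_of_half_inning:
        return lead == 1                  # clean inning: one-run lead only
    # Mid-inning: the tying run must be on base or at the plate.
    # With `runners_on` runners plus the batter, the tying run is in play
    # whenever the lead is no bigger than runners_on + 1.
    return lead <= runners_on + 1

print(qualifies_for_proposed_save(lead=3, start_of_half_inning=True, runners_on=0))   # False
print(qualifies_for_proposed_save(lead=2, start_of_half_inning=False, runners_on=1))  # True
```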
2) Establish an Official Historic Hold Rule: It's time for MLB to add an official stat to help evaluate the myriad non-closer relievers on the planet, and it should be added retroactively to the statistical record. Credit a pitcher with a hold when he enters a game in which his team leads by one run or the score is tied at the beginning of a half-inning, or when the tying run is at the plate in a mid-inning appearance, and he a) records an out and b) maintains the lead or the tie. A pitcher should not get a hold for walking the bases full with a one-run lead and then being saved by the reliever who replaces him. Also, a hold should not be credited without taking into account any runners the pitcher allows. Let's say pitcher A enters with a runner on first, one out, and a one-run lead. He strikes out the first batter and then walks the second. He is taken out, and the new pitcher (B) gives up a home run. Pitcher A should not get a hold because his runner was the go-ahead run. Now let's say B instead strikes out the last batter to save A's bacon. I'm on the fence as to whether A should get a hold or not. B should, but without the effort from B, A would have blown the lead. Then again, the lead didn't actually change hands, and A did have something to do with that (the strikeout). These are the sorts of things that need to be figured out, but an official stat that is clearly thought out is a must.

3) Record Historic Relief Statistics Officially: All of the calculations throughout this section have been based on prorating the relief stats for those pitchers who both started and relieved games (more than 50% of all pitcher seasons). Baseball has chosen to lump all pitcher stats together. This is fine when a pitcher is a pure starter or pure reliever, but not for swingmen. Baseball should go back through the historical record and either separate pitching stats into starting and relieving or create a separate set of statistics for each role alongside the combined totals. I'm sure this is high on Bud's to-do list.

4) Investigate Middle Relievers More Fully: Middle reliever seasons comprise about 75% of all reliever seasons. I have used Bill James' reliever archetypes for this study, and those archetypes are based solely on relief aces or closers; there were no middle relief archetypes. Therefore, I created one based on a combination of the starter and the Clint Brown-type reliever archetypes. I feel that the results for middle relievers (e.g., Paul Quantrill and Mark Eichhorn) overvalued their appearances. This may not be an issue for middle relievers from over 20 years ago, given that they were a sub-par crew for the most part. But as teams assigned better and better pitchers to this role, the stats started to look more inflated. Further study, and possibly an explicit middle reliever archetype, is needed to determine if this is indeed the case.

Sources

Data: Sean Lahman's 2003 and 2004 MS Access baseball databases, Baseball-Reference.com. (I don't recommend doing any real database querying with Access. I prefer the dead/moribund FoxPro and Paradox dbs, but these are the conditions that prevail. It's Microsoft's planet; we're just living on it.)

The End?
—Last frame of The Blob (not to be confused with Terry Forster, "The Big Tub of Goo")

Let the end try the man.
—"Hammerin'" Henry IV by William "Author" Shakespeare

Not every end is a goal. The end of a melody is not its goal: but nonetheless, had the melody not reached its end it would not have reached its goal either. A parable.
—Friedrich "Fat Freddie" Nietzsche
“Hall’s of Relief”—Final Analysis (Really), Pt. III
2004-02-19 17:42
Previous entries: The 1870s, '80s, and '90s; The 1900s and '10s; The 1920s, '30s, and '40s; The 1950s; The 1960s; The 1970s; The 1980s; The 1990s and 2000s; 2003 Notes: Part I & II; Final Analysis: I, II, III, and IV.

Archetypecasting

One result that I found interesting was a comparison of the Relief Win average for various reliever types over time. Here is a table per decade for each type:
I find it interesting that the average reliever didn't prevent runs better than the average pitcher all-around until the Eighties. Also, middle relievers as a whole have never been better than average in this area. The Relief Wins also show you when the various types took hold. A higher total indicates that the style was used more often and was successful. It also indicates that the better pitchers were assigned to the given role. The Nen type saw its Relief Wins climb in the Eighties and peak in the 2000s. The Sutter-type Relief Wins started to climb in the Seventies and then more than doubled in the Eighties before disappearing altogether. The Wilhelm type held sway in the Sixties but even though its numbers decreased it still captured high RWins until the specialization of the Nineties. The Face crew witnessed a lower peak but it did start in the Fifties. The Brown group actually didn't peak until the Eighties, I believe because of the resurgence of the long reliever in that era and because the talent assigned to the role when it was first used extensively was not the best on the staff but rather the tail-end staffers. Now here are the top 10 (or so) for each archetype: Clint Brown Type (Clint Brown was 219th with 1.43 Rwins)
Elroy Face Type (Face was 114th with 2.41):
Hoyt Wilhelm Type:
Bruce Sutter Type:
Robb Nen Type:
Middle Relievers:
Now here are the top relievers ranked by Relief Wins per year:
Here are the Worst based on per-year RWins:
Note that most are one-year tryouts. The worst with at least 5 years was Dan McGinn: -5.92 Total RWin over five years for a -1.18 yearly average. The worst 10-year vet is the great Pat Mahomes: -7.15 RWin over 10 years or -0.715 per year. Next, here are the top relievers broken down by handedness. First, lefties:
Now righties:
Finally, here are the top performers by decade:
Hall Pass

I now feel comfortable comparing relievers based on our new measure, Relief Wins. But how do they compare against starters in general and Hall of Fame pitchers specifically? How many relievers deserve to be in the Hall? Do any? To determine this, I used James' conversion of prevented runs to wins above average for starting pitchers. Here is a list of all of the Hall of Fame pitchers with their Starting and Relief Wins above average (including Monty Ward but not Babe Ruth). Also listed are the average for all HoF pitchers, the standard deviation from that average, and the range that these two values prescribe:
Therefore, a reliever with 33.31 RWins is "better" than the average Hall of Fame pitcher. And pitchers between 15.37 and 51.25 are, in theory, within the Hall range. Here are a few other Hall of Famers who went in as position players:
And here are some pitchers who are not in the Hall of Fame (Spalding and Griffith are in as executives):
Now, here are the Relief and Starter Wins for our relievers (minimum of 5 Relief Wins and 10 total wins above average):
So using our Hall of Fame average of 33.31, two pitchers not in the Hall would qualify: Goose Gossage and Mariano Rivera. I hope these results help solidify Gossage's standing as a solid Hall of Fame candidate. Rivera is still one year shy of qualifying for the Hall and, as with any active player at his peak, may suffer a decline that will negatively impact his Relief Win totals. However, he looks like he'll be a solid pick upon retirement. There are 50 others that fall within our Hall "range" (15.37 to 51.25 RWins). Of those I expect no more than a handful to get serious support (Sutter and Lee Smith, and perhaps Franco and Smoltz). However, it is gratifying to see luminaries from relief pitching history "qualifying": Dan Quisenberry, John Hiller, Eddie Rommel, Ellis Kinder, Firpo Marberry, Mike Marshall, Ron Perranoski, and Stu Miller.

I'd like to look at one more aspect of this Hall of Fame argument. Given that relief pitching, perhaps more than any other role, is dependent upon trailblazers, men who establish the manner in which the reliever is used and whom others then lemming-like follow, establishing who those pathfinders have been may be illuminating. Of course, this is highly subjective, but given that I have looked at relief pitching in every era throughout major-league history, I feel that I can provide a list of the more salient suspects. I'll split the list between the pitchers themselves and the men behind the scenes (mostly managers, plus Jerome Holtzman, inventor of the save stat) who made a difference.

First, here are the pitchers that have been the most influential:

1. Bruce Sutter

Now the non-players:

1. John McGraw

I think that the trailblazer argument could be used to help Sutter's and maybe even Firpo Marberry's Hall of Fame case. If the history of relief pitching were reduced to one man, it would be Sutter. Is that reason enough to put him in the Hall? Probably not, but couple that with his very good career, and he has a good case. To be completed...
"Hall's of Relief"—Final Analysis (Really), Pt. II
2004-02-18 01:08
Previous entries: The 1870s, '80s, and '90s; The 1900s and '10s; The 1920s, '30s, and '40s; The 1950s; The 1960s; The 1970s; The 1980s; The 1990s and 2000s; 2003 Notes: Part I & II; Final Analysis: I, II, III, and IV.

And The Winners Are…

OK, so now that we have gotten the preliminaries out of the way, we are ready to get to the results. Here are the best reliever seasons of all time based on relief wins. Also included are reliever types and Pitching Win Shares:
Now here are the worst:
Now here are the career best 100 relievers by relief wins, this time with the number of seasons per reliever type:
Which brings us to career worst relievers:
And just for fun, here are some other pitchers who either were mentioned throughout the study or are of some significance (including many of the best and worst active players and all of the Hall of Famers I could find. Note Bump Hadley is here because he is the most average reliever of all-time):
Validating the RWin Data Model

Well, now that we have results, are they meaningful? OK, Hoyt Wilhelm may be the best reliever of all time, but could Jim Kern's 1979 season be the best for a reliever ever? And even if it was the best ever, could it have been that much better than any other? There are some important things to remember. First, technically we are not looking for the best season but the most valuable one. And we define value as the runs a pitcher prevents above average, weighted by the way that the pitcher was used. One could argue that a more recent reliever, say Gagne in 2003, had the best season ever, but given the way that he was used his potential value was lessened. Second, Kern's 1979 was pretty darn good after all. He pitched in 71 games, all in relief, had a 13-5 record, 29 saves, and a 1.57 ERA, 165% better than the adjusted league average. He threw 143 innings, allowed 99 hits, struck out 136, and walked 62. Total Baseball rated him the best player in baseball that year by Total Player Rating. He ended up fourth in the AL Cy Young vote in '79. Besides, a number of stellar but forgotten seasons bubbled to the top of the list: Ellis Kinder's 1953, John Hiller's 1973, Mark Eichhorn's 1986, Dan Quisenberry's 1983, Radatz's first few years, etc. And the seasons that show up among the worst are truly awful, and the ones with the most innings show up as even more awful.

One other thing to keep in mind is that we are pigeonholing these seasons into a handful of reliever roles and assigning value accordingly. Kern fell into the Wilhelm category, the one for which the pitching runs saved are valued the most. If the criteria I used had been tweaked slightly, he could have been placed in the Sutter group and his relief wins would have been slightly lower. But whoever would replace Kern at the top if we tweaked the criteria a bit would be someone already in the upper echelon. I think that the system is not perfect but that it points us in the right direction. I feel that it hits the mark better than pitching Win Shares does for relievers. I prefer that the average pitcher sit at zero. In Win Shares a pitcher is sometimes given a fraction of a Win Share just for showing up and pitching a few innings. So you end up with poor pitchers receiving either zero Win Shares or half a Win Share or so. There's no way to differentiate a bad season from a truly awful one. With RWins, the worse a pitcher is (i.e., he has a high ERA in a lot of innings or gives up a ridiculous number of earned runs in a smaller set of innings), the worse his final score ends up being.

My one problem is with middle relievers. I feel that this system overvalues them, but I'm not sure how to rectify it. My brokered solution, splitting the difference between a Clint Brown-type pitcher and a starting pitcher in order to calculate middle reliever relief wins, does tend to drive the ratio of prevented runs to relief wins down into Hoyt Wilhelm territory when the innings pitched per game get very low (well under one). For the most part the pitchers with these totals end up pitching so few innings that the relief wins are negligible. However, there are a handful (e.g., Quantrill in 2003, 4.62 RWins with a 4.50 PR-to-RWin ratio) that get a bit out of hand.

To be completed…
A-Rod To Third Or 6 To 5 (By the Bronx Transportation Authority)
2004-02-17 17:14
I don't want to divert my attention from the quest for a Holy Grail in relief pitching (especially to answer questions like "What is the air-speed velocity of an unladen swallow?"), but I saw Tim Kurkjian spout off about how A-Rod's move to third would at least initially hurt his defense. He claimed to have performed some sort of statistical analysis on the subject, a chuckle-fest in and of itself. He cited the fact that the ball gets to third basemen more quickly as the crux of the problem. They also apparently take a goodly number of balls off the chest, according to the redoubtable Sir Tim, who must have come straight from a replay of Corbin Bernsen's pathetic attempt to play the hot corner in Major League.

First, third base has changed more than any other position over time. James devoted, I believe, six different formulae to the study of third basemen throughout baseball history in Win Shares. Second, I think most will tell you that short is a more physically grueling position. Third, if the Yankees were so concerned about D, they would put Rodriguez at short and install Jeter, a player incapable of going to his right anyway, at third.

The last two of those points get me to thinking. Sure, A-Rod will need to adjust defensively as he learns third. Kurkjian pointed to Rico Petrocelli, who was also moved to third at the age of 28, but looking at Petrocelli's defensive stats, it looks like he adjusted quickly. His range factor at third in his first year was at the level he would attain over the rest of his career. Sure, relative to the league average it was not as good as his shortstop numbers (though it was still well above average), but it contradicts Kurkjian's claim that A-Rod's D would suffer for a short while before he readjusted to his previous level. Of course, a study of short-to-third position changes and the subsequent defensive issues of the guinea pigs would be quite involved. It would have to factor in the player's age, a means to standardize the stats over time, a means to translate stats from one position to the other over time, etc. Therefore, I won't attempt it.

I will, however, take a look at how just such a move affects a player's offensive stats. Does playing a less physical position aid one's offense? Or does learning a new position at the major-league level negatively impact one's offense? (Keep in mind that my favorite third-sacker is Mike Schmidt, a man who played shortstop throughout his tenure at Ohio University and who was playing third base at the major-league level within one season.) OK, first, who qualifies for such a study? I searched for all shortstops who played at least 80 games (half a modern season) at that position one year and then played at least 80 at third base the next. Forty-two player-seasons qualified (actually, just 38 different men, since Buck Weaver, Dave Chalk, Woody English, and Eddie Kasko each made the list twice). Then I calculated their percentage stats and compared each man's shortstop season to his third-base season. I found that the average player's slugging and OPS went up 3%, his on-base percentage went up 2%, and his batting average went up 1%. The average age of such a player was 28 (in his third-base season), so some of the increases may be explained by maturity. However, if A-Rod experiences the same sort of improvement (and ignoring the change of home park), his numbers for 2004 would be: .302 BA, .404 OBP, .620 Slug, and 1.022 OPS. Do I expect Rodriguez's offense to better his 2003 MVP numbers? Not really.
Yankee Stadium is a much harder park to hit in than Arlington, especially for right-handers. However, there is little evidence that the move to third will affect his offense, and that, after all, was the reason the Yankees grabbed him. Of course, A-Rod's move to third makes him less of a "special" player. If his stay at the hot corner is a long one, his claim on the title of greatest shortstop of all time becomes tenuous at best. And the pool at third is more cramped. He'll still be the best third baseman, but not by the mile by which he led at short. 'Tis a pity. Anyway, here are the players who moved from short to third and their stats (with A-Rod's "projection" at the bottom). The first set of stats is for year one (short) and the second for year two (third):
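For anyone who wants to reproduce the group, here is a minimal sketch of the 80-game filter run against a Lahman-style Fielding table (playerID, yearID, POS, G). The file name and the exact query are illustrative, not the ones actually used for the list above; the batting comparison would then join these pairs to the Batting table for both seasons.

import pandas as pd

# Hypothetical path to a Lahman-style Fielding table
fielding = pd.read_csv("Fielding.csv")

# Games at each position per player-season
games = (fielding.groupby(["playerID", "yearID", "POS"])["G"]
         .sum().unstack("POS").fillna(0))

ss_years = games[games["SS"] >= 80].reset_index()[["playerID", "yearID"]]
tb_years = games[games["3B"] >= 80].reset_index()[["playerID", "yearID"]]

# Pair each qualifying shortstop season with a qualifying third-base season
# in the very next year.
movers = ss_years.merge(tb_years.assign(yearID=lambda d: d["yearID"] - 1),
                        on=["playerID", "yearID"])
print(len(movers), "qualifying player-season pairs")  # the study finds 42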
Friggin' A-Rod
2004-02-16 02:19
Just when you thought this offseason couldn't get any wackier, the Yankees acquire Alex Rodriguez to replace ardent cager Aaron Boone at third. In exchange the Rangers get mercurial second-sacker Alfonso Soriano and a player to be named. The Rangers also have agreed to pay $67 M of the $179 M still on A-Rod's contract. The deal was approved by the players union (unlike the last A-Rod deal) and just awaits Bud Selig's slatternly John Hancock. Though the Red Sox were not involved in the deal, the Yanks acquiring A-Rod is a direct slap in the face to the Red Sox management and the team as a whole. The Yanks and Sox have been parrying and dodging each other all offseason, each trying to get the upper hand in the division. The Sox, you'll recall, attempted to acquire A-Rod not too long ago and Icarus-like failed miserably. In the process they further alienated two of their bigger stars, Manny Ramirez and Nomar Garciaparra, who were involved in the deal, and whose involvement was made quite public. The way that the two organizations went about trying to acquire the best player in the league speaks volumes about them. Of course, the Yankees have deep pockets, but the Red Sox have the second-highest payroll in baseball, and even though they are sabermetrically minded, they are no Oakland A's. The Yanks decided to make the deal and struck quickly. The Red Sox perseverated, ticked off arguably their two best position players, and finally let the deal slip through their fingers over reportedly $20 M, or $3 M a year over the course of A-Rod's contract. The Yankees do create another hole at second by trading Soriano, and the replacements are the not-too-savory Enrique "Louie Sojo Jr." Wilson, Miguel "Don't call me Joel" Cairo, or a transplanted Erick Almonte. Recently acquired Mike Lamb and Tyler Houston won't help much at second. But it doesn't matter. The Yankees may have more holes in the depth chart for their position players. But it doesn't matter. The Yanks have displayed again that they are the superior organization. For all of the Red Sox machinations, they are still the second-place organization. I'm not saying that the Yanks have locked up the division in acquiring A-Rod. That remains to be seen. The Yankees have, however, responded to the Red Sox challenge in their best Apollo Creed: "I refuse the challenge because it is no challenge, but I'll be glad to beat up on Boston again" (actually "Balboa again", but you get my drift). And seemingly none of the other issues will come into play because the Yankees will go about resolving them. I know that some will bemoan the hegemony of the Yankee dollar (probably the same lot who hypocritically lamented the failure of the Red Sox to acquire A-Rod). However, you have to admire an organization that gets things done (at least I, as a Phils fan, have to). So now it's the Red Sox's move. I'll tell you one thing: this is a good thing for baseball. Baseball needs its best players on its best teams on the biggest stages. And the Yankees and Red Sox battling for the division this year might be the best pennant race since the Braves-Giants in '93. Let me tell you, that's a good thing for baseball.
Glorious Ritter Dead
2004-02-16 01:39
Lawrence Ritter, the man responsible for the greatest baseball book ever created (in my humble opinion), The Glory of Their Times, has died. If you have not read the book, you owe it to yourself as a baseball fan to do so. And once you have read it, track down the collection of recordings from the interviews that informed the book. They were released on CD a few years back. I can proudly say that I have both in my baseball collection. I can think of no better tribute to this man than experiencing his masterwork, whether for the first or the hundredth time.
GM Not Surprised
2004-02-11 23:45
ESPN reports that the Dodgers will name Oakland Assistant GM Paul DePodesta their GM for 2004. Given that the combination of Dodger Stadium, the Dodgers' offense, the Dodger pitching staff, and the rest of the world's offense makes for a sabermetrician's dream, this should be interesting. This little piece that my friend Murray sent me tells me that DePodesta definitely does not have a problem with his confidence. Now we'll have to see what he does as the number one guy.
Around the Horn
2004-02-11 00:35
I will be updating the last installment in the Hall's of Relief series. Keep checking the post below. Our old friend John "Only Baseball Matters" Perricone is back and better than ever with a new site. Go check him out. The Baseball Crank has an Established Win Shares Level Report for the AL East. I have added a slew of links, not the least of which is BaseballOutsider.com.
SABR Rattling
2004-02-09 12:27
This weekend I attended the annual meeting of the New York regional chapter of the Society for American Baseball Research (the Casey Stengel chapter) with my friend Murray. Although I have been in SABR for eleven or twelve years, it was my first time attending a SABR event; Murray is a veteran of many a New York regional meeting.

The first thing that I found odd was the venue, the funkadelified Gershwin Hotel on 27th Street near Madison Square Park, the original site of Madison Square Garden four or five iterations ago. Not that anyone else would find this interesting, but I used to work in the building directly across the alley-like 27th St. a couple of years ago. I was at an eCommerce company before the bubble burst, back when the Flat Iron district of New York was dubbed Silicon Alley (and even bore a sign with the embarrassing moniker, as Murray reminded me, though neither of us thinks it's still there now). It's an odd area of New York, where third-world street vendors, whom Giuliani never bothered to clear out, are juxtaposed (within a block or two) with the yuppified gentrification of bars, restaurants, and overpriced housing. Anyway, I used the back entrance of my building to walk north to Penn Station and would exit facing the Gershwin, with its faux papier-mache flames surrounding the outside lights and the befuddled foreign tourists guarding the entrance, and I always wondered what it looked like inside. So I got to find out. (Also, I was watching the first season of "Curb Your Enthusiasm" on DVD on Sunday and noticed that the hotel that appears in the HBO special that spawned the series was the Millennium Hotel on 44th, which I used as a shortcut to my pizza place on 43rd when I worked in Times Square. It's the hotel in the scene in which Larry's manager tries to get HBO to pay for his $270+ X-rated movie bill, much to Larry's chagrin.)

Anyway, enough of hotels. I have to say, before I inevitably start running down the meeting, that everyone was very nice and extremely exuberant about the game in general and the meeting itself. It had that endearingly personalized charm that tells you that the organizers truly care about the event. The main organizer mentioned that many SABR members, some of whom did not even attend, sent in checks to cover the cost of the event and to cover members who were without the means to attend. That's just a very nice, thoughtful thing for them to do. It compensated for the cramped accommodations, the not-too-appetizing spread, the noise and fumes of construction seemingly in the next room of the hotel, and the technical glitches (the final presentation, a preview of a documentary on Cooperstown, had to be canceled because the presenter's DVD wouldn't play, no one knew how to debug the problem or even open Task Manager, and there was no backup media, all of which is odd given that baseball research, which is in the title of SABR, tends to involve computers in some way, shape, or form). Also, the guest speaker, Tom Keegan of ESPN, had some good stories about Ernie Harwell, Denny McLain, Pete Rose, and others. He was affable, seemed happy to be there, and took many questions, even if some were, well, I'll get to them later. Octogenarian Tom Knight, the professor emeritus of the chapter, who sort of emceed the event, had some funny stories until he ran out of steam, understandably, about two-thirds of the way through the all-day meeting. Hearing Phil Lowry, the author of the glorious Green Cathedrals, speak was a pleasure.
He discussed his research on games that either ended the latest or lasted the longest, a topic that I initially thought would be dry but that he presented with interesting angles on each game. One presenter played some highly entertaining old recordings of Bill Stern, the once extremely popular radio broadcaster, who would relay quasi-factual stories in the most outlandish and definitely apocryphal ways. One story claimed that the National League was formed to prevent a group of convicted criminals from playing baseball. Another claimed that Harvard Eddie Grant (as opposed to the Electric Avenue one), the only major-leaguer to die in World War I, threw himself on a hand grenade to save a young major named Harry S. Truman. Great stuff. There was also a presentation on Tug McGraw, which the presenter began by slapping a mitt on his thigh a la Mssr. McGraw. Being a Phils fan, though, I noticed that the team for which McGraw played as long as he did for the Mets didn't merit a mention. I turned to Murray and said that it was the Ken Burns documentary version of Tug's career (meaning that it was completely NY-based), but I guess in fairness the meeting was in the Big Apple, and the presenter was cut off about halfway through. All of the other presenters seemed excited to participate, which made up for sometimes less than intoxicating presentations. One thing that I did find troubling was the level of the discourse. Old saws that would shame a Baseball Primer poster were dusted off and presented as new ideas, squeezed into pointed questions that constituted speechifying sans soap box rather than a true exchange of ideas. Opinions concerning the non-issue of Janet Jackson's apparel choices were repeatedly expressed. Keegan said that the media's desire to grab the 18-25 demographic was dumbing down the coverage, a view that I agree with (not that I necessarily have a problem with it as long as I can deride it), but an odd one coming from a man working for the network (or at least the radio division of the network) that brought you the execrable "Pardon the Interruption" and "Rome Is Burning." Then he started to comment on the Super Bowl's choice of Kid Rock, saying that he listened to Bob Dylan on the way over to the hotel (huh?). I don't know why anyone at the meeting would bother commenting on an inferior sport that caters to the basest emotions at all times. Not that there's anything wrong with that, but why be surprised by the crappy, allegedly indecent halftime show? Someone else said that the Super Bowl brouhaha, or bra-ha-ha, displayed the good PR that baseball enjoys as opposed to football (Huh? On what planet?). One woman went on a tirade-cum-question about how Mike Ditka's Levitra™ commercials somehow led up to the Super Bowl halftime show. Keegan had difficulty responding or even finding a question in her rantings and ended up defending men with certain medical shortcomings. Other comments that literally had me squirming in my seat concerned the ridiculousness of pitch counts, Greg Maddux's wussiness in pitching only "5 or 6 innings" a start in comparison to Warren Spahn's ability to complete games, and the necessity to add an asterisk to any recent home run records due to alleged steroid use (Keegan agreed with the epithet "juice monkeys" for today's players, a comment that had our mouths agape and eyebrows raised to our hairlines).
Pete Rose was referenced repeatedly, and one brazen questioner even asked Keegan if Rose's reinstatement (which will probably never be granted) would open the door for Shoeless Joe's revivification, at least in the baseball universe. This was a question that had me shaking my head and had Murray and me almost heckling the audience member. Not only are Jackson and Rose barred from the game for different reasons and on different evidence, but, unlike Rose, Jackson had been eligible for the Hall vote for many years and had even received votes, only to be found wanting by writers and veterans alike. Does anyone remember that Jackson, too, denied the story? One of the few good questions came from Murray, who asked Keegan what websites he reads. The response explained a good bit about the media being dumbed down. He cited a great "new" site, Retrosheet, a great site to reference, but it seemed like dilettante interest rather than a true commitment to the site. Anyway, the other sites he reportedly read were all ESPN writers, without even a Rob Neyer in the mix. It's no wonder that the blogs and independent websites have taken such a hold in baseball reportage in the last few years. The members of the media just sit around reading what the guy in the next office said about so-and-so. When most reporting is handled by the reportedly five major conglomerates that own almost all newspaper, TV, and internet outlets, who would be surprised that they are more concerned with keeping up with the Joneses than with coming up with something thought-provoking and original? At the end Murray summed up the inherent problem in SABR well. He said that it stems from the tension between "rigorous, research-intensive SABR" and the one peopled with "diehard fan types who know more than average but who wouldn't know a run created from a rundown". My explanation was simply that there were too many Met fans—to paraphrase Eddie Murphy, I kid the Met fans because they're Met fans. Anyway, I come to praise, not bury, the SABR meeting, though that may be hard to tell. I would go again, but next time I would be a little bit wiser as to what to expect. Well, who knows? After this, they may not want me back.
"Hall's of Relief"—Final Analysis (Really)
2004-02-08 01:24
Previous entries: The 1870s, '80s, and '90s The 1900s and '10s The 1920s, '30s, and '40s The 1950s The 1960s The 1970s The 1980s The 1990s and 2000s 2003 Notes: Part I & II Final Analysis: I, II, III, and IV.

Evaluation is creation: hear it, you creators! Evaluating is itself the most valuable treasure of all that we value. It is only through evaluation that value exists: and without evaluation the nut of existence would be hollow. Hear it, you creators!
—Friedrich "Fat Freddie" Nietzsche

For something that was named for a cheesy Eighties cough-drop commercial and was inspired by an even cheesier Hal Bock article, I think this series has borne a good bit of fruit. After reviewing each decade and becoming ever more verbose and table-laden in the process—how could he have devoted two sections to 2003 alone?—I feel that it's finally time to wrap the whole enchilada up. To that end I would like to take a stab at evaluating all relievers across all eras.

First, though, I have to comment on the inherent limitations in just such a study given the quality of the historical data. This is due to the fact that until relatively recently the majority of pitchers both started and relieved. It wasn't until 1978 that more than half the pitchers in the majors were specialists, pure starters or pure relievers, as opposed to swingmen. Swingmen now comprise only about a quarter of all pitchers. At the other extreme, swingmen peaked at 81.52% of all pitchers in 1933. The nadir for pure starters was 1.53% in 1932, and for pure relievers it was 4.42% in 1898 (that is, after free substitution of players was allowed in 1891). The all-time averages are still 55.13% swingmen, 14.37% starters, and 30.49% relievers (in 2003, it was 27.94%, 21.90%, and 50.16%, respectively).

All this means that if you want to evaluate relief pitching since Bruce Sutter, then aside from the odd Byung-Hyun Kim being yo-yoed in and out of the pen, the data are pretty much available. Just compensate for era and go. Dating back to the Fifties, most star relievers rarely if ever started a game. (Middle relievers, though, whose role had its own evolution trailing the closer's or ace reliever's, were anything but pure relievers until even more recently.) However, if someone wants to study the history of relief pitching in toto, as we are attempting here, he inevitably comes face to face with the limitations of the statistical record. Baseball has never officially separated a pitcher's statistics as a starter from those as a reliever. Sure, one can look on ESPN.com or MLB.com now and get statistical breakdowns for current players, but that does diddly for someone studying Hoyt Wilhelm, Firpo Marberry, or Doc Crandall. This is borne of the reliever role being less a position and more an ever-evolving strategic construct. So until someone like Retrosheet has divided this Red Sea of data into starting and relieving statistics, we are left with guesswork for a good chunk of the data record. This is the sort of thing that Major League Baseball should be rectifying on its own dime, but Bud and his boys are more preoccupied with determining home field advantage in the World Series based on steroid use. The only solution that made sense to me was to prorate the pitching stats based on the average innings pitched per game for the given league in the given year. I know that this is an approximation based on a small sample.
Given that there has never been a full season since free substitution of players started in 1891 in which baseball had more pure starters than swingmen, this does induce a higher potential for error than I would like, but hey, what are you going to do? (By the way, the strike-shortened 1994 season did have more pure starters, 27.51%, than swingmen, 24.95%.) Actually, it got a bit more complicated than that. I ended up prorating each swingman's stats based on a ratio: the prorated relief innings pitched divided by the sum of the prorated relief innings and the prorated starter innings, all multiplied by the actual innings pitched. The prorated relief innings pitched are based on the league average, in the given year, of innings pitched per relief appearance for all pure relievers, multiplied by the given pitcher's relief appearances. The prorated starter innings pitched are based on the league average, in the given year, of innings pitched per game started for all pure starters, multiplied by the given pitcher's games started. My thought was that the numbers would be closer to the actual if I weighed the two against each other. Finally, I made sure that the result was within the minimum and maximum asymptotes for the pitcher given his starting and relief stats. The maximum innings pitched as a reliever is constrained by a) the total innings pitched (he couldn't exceed that as a reliever) and b) the complete games he pitched. I assigned an eight-inning minimum to each complete game—I know it could be and probably was more. So the relief innings pitched could not exceed the total innings pitched minus eight innings per complete game. As for a minimum, I initially assigned a third of an inning per relief appearance but then realized that this was erroneous. There really is no minimum. A pitcher could make a dozen relief appearances and not get anyone out. He would then have zero innings pitched. I know that this would never happen in practice, but remember that we are talking about an absolute minimum. Anyway, I think only a handful of pitchers butted up against the max (and even fewer hit the .1 IP min before I removed it). Here are some formulae if you are into those things. Those who are uninterested, please skip ahead:

RIPAvg = Sum(IP)/Sum(G) (the relief IP average for the year and league; only for pitchers who did not start a game, GS=0)
SIPAvg = Sum(IP)/Sum(G) (the starter IP average for the year and league; only for pitchers who exclusively started, GS=GP)
EstRIP = (GP-GS) * RIPAvg (the estimated reliever IP)
EstSIP = GS * SIPAvg (the estimated starter IP)
BestEst = IP * EstRIP/(EstRIP+EstSIP) (the best estimate)
MaxIP = IP - CG * 8
FinalEst = MaxIP if BestEst > MaxIP; otherwise FinalEst = BestEst (the final estimate)

For the sake of comparison, here are the 2003 swingmen with the estimated relief innings (min. of 10 Est IP) and the actual (from ESPN.com):
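(Putting the formulae above into code, a minimal sketch of the estimate looks something like this; the function and field names, and the example numbers, are purely illustrative, not the actual study code.)

# Swingman relief-innings estimate per the formulae above (a sketch only).
# lg_relief_ip_per_g and lg_start_ip_per_g are the league-year averages for
# pure relievers (RIPAvg) and pure starters (SIPAvg).
def estimate_relief_ip(ip, gp, gs, cg, lg_relief_ip_per_g, lg_start_ip_per_g):
    est_rip = (gp - gs) * lg_relief_ip_per_g       # EstRIP
    est_sip = gs * lg_start_ip_per_g               # EstSIP
    if est_rip + est_sip == 0:
        return 0.0
    best_est = ip * est_rip / (est_rip + est_sip)  # BestEst
    max_ip = ip - cg * 8                           # MaxIP: charge 8 IP per complete game
    return min(best_est, max_ip)                   # FinalEst

# Purely illustrative: a swingman with 150 IP in 45 games (20 starts, 2 CG) in
# a league averaging 1.7 IP per relief appearance and 6.2 IP per start.
print(round(estimate_relief_ip(150, 45, 20, 2, 1.7, 6.2), 1))  # ~38.3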
Note that as the number of innings shrinks, the accuracy of the estimate becomes more of an issue. For pitchers with at least 50 estimated innings, the estimate is about 10%, or 5 innings, off; for 30 innings, about 20%; and for 10 innings, the estimate is more than a third off of the actual. This is due to a few especially long relief appearances or a few especially short starts being able to sway the smaller samples more easily. Given that the pitchers we will be evaluating as among the best ever should have pitched over 50 innings in relief, I think a 10% error (5 innings) is the best we can do and, therefore, has to be accepted. Of course, there is no way to know if the historical data are as accurate as the 2003 data given that we don't have that breakdown to begin with. OK, if we accept that these numbers are as accurate as we can get, let's move on. I then took the estimated relief innings pitched (i.e., FinalEst above) and prorated the pitcher's other stats by the estimated relief innings divided by total innings.

So now that we have all of the data for relievers, we need a way to compare one to another that crosses all eras. Well, I'm going to be lazy and rely on other people's research. I first started devising a statistical method to evaluate all relievers after reading Bill James' "Valuing Relievers" in his The New Historical Baseball Abstract. He has a section in which he researches these two questions: 1) What is the value of one run saved by a relief ace, as contrasted with one run saved by a starting pitcher? 2) Is the modern style of using the ace reliever, which involves using him almost exclusively in "save" situations, the optimal usage pattern? James' findings are that a run saved by a reliever has far more significance than one saved by a starter. He uses archetypes to answer the second question: Clint Brown for the mid-Thirties to mid-Fifties, with an average of 58 games pitched, 106 innings, 10 saves, and 5.88 runs saved equal to one win. Elroy Face: mid-Fifties to 1962, 59 G, 96 IP, 15 Sv, 5.14 R equal to a win. Hoyt Wilhelm: 1963-78, 72 G, 128 IP, 24 Sv, 4.47 R to a win. Bruce Sutter: 1978-89, 61 G, 111 IP, 38 Sv, 4.73 R per win. Robb Nen: 1990 to present, 77 G, 91 IP, 41 Sv, 4.64 R per win. And one win per eight to nine runs saved for a generic starting pitcher. James found that the Wilhelm pattern was optimal. He expands on these findings to develop the theory of optimal reliever use that I outlined in the Gagne section. Oddly, James does not seem to use these findings to develop his Win Share formula for relief pitchers.

OK, so we can relate runs saved to wins, but how do we determine a relief pitcher's runs saved? For that I would like to use a formula from Total Baseball that has been modified by Baseball Prospectus. That formula is Pitching Runs (or, as BP calls it, Adjusted Pitching Runs). It calculates the (earned) runs prevented over the league average. The difference between the expected earned runs for the number of innings pitched and the actual earned runs is the basis for PR. The original formula was innings pitched multiplied by the league average ERA divided by nine, minus the pitcher's earned runs allowed (PR = IP * Lg avg ERA / 9 – ER). BP updated the formula to use runs instead of earned runs and to fold in the pitcher's home park factor (APR = Lg avg Run Avg * IP / 9 – R / Park factor).
I am going to use a combination of the two: I will keep earned runs from TB but will add the park factor from BP (actually, the pitcher's park factor from BaseballReference.com via Sean Lahman's database). I am more concerned about the earned runs than the runs allowed. I added in the park factor to adjust the earned runs for the pitcher's home park. My final formula is innings pitched times the league average ERA divided by nine, minus earned runs divided by the pitcher's ballpark factor (PR = IP * Lg avg ERA / 9 – ER / PPF). Here are all the relievers who scored 25 or more Pitching Runs all-time. You'll note that there are a good number of middle relievers and early closers (though Gagne is 16th):
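For those keeping score at home, here is the calculation as a minimal sketch in code. Treating the pitcher park factor as 100-based follows the BaseballReference convention; the example numbers are made up, and this is an illustration, not the actual study code.

# Park-adjusted Pitching Runs per the final formula above (a sketch only).
def pitching_runs(ip, er, lg_era, ppf=100.0):
    expected_er = ip * lg_era / 9.0    # earned runs an average pitcher would allow in those innings
    return expected_er - er / (ppf / 100.0)

# Illustrative only: 80 relief innings, 20 earned runs, a 4.50 league ERA,
# in a park with a 95 pitcher park factor.
print(round(pitching_runs(80, 20, 4.50, 95), 1))  # ~18.9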
Now here are the worst all-time. There are some real old-timers in this mix:
Dick Welteroth was truly awful: 2-5, 52 GP (2nd in the AL), 2 GS, 25 GF (3rd in the AL), 2 saves, 95.1 IP, 107 H, 83 ER, 89 BB, 37 K (a 0.42 K:BB ratio), and a 7.36 ERA (42% worse than the park-adjusted league average). He was 21 and pitched only 6 more innings in the majors. Just for fun, here are the top 100 relievers by career PR:
I had to include it because I thought it was quite a list. Just seeing names like John Hiller, Ron Perranoski, Ellis Kinder, Stu Miller, Lindy McDaniel, Roy Face, Frank Linzy, Bill Henry, Gerry Staley, Bobby Shantz, and Walter Johnson peppered among the modern closers in a statistical comparison whets my appetite for the adjusted comparison. Remember that James' findings show that modern closers' runs prevented are worth less in terms of wins than those of the ace relievers of the past. So I expect the Wilhelmites to be buoyed even more by the final analysis. Now here are the all-time worst in career PR:
Jerry Johnson pitched in the Seventies. Aside from a good year in 1971 with the Giants (12-9, 2.97 ERA—14% better than the park-adjusted league average, 18 saves, 67 GP all in relief, 109 IP), he was pretty awful for 10 seasons, mostly in relief. Now we have to divide the relievers among Bill James' reliever archetypes and assign wins to the runs prevented. That sounds simple. Well, assigning individuals to roles turned out to be more difficult than I first thought. The difficulty stems from the fact that the role definitions are based on usage, which does not necessarily translate into the statistical record as it is currently configured. I had intended to use a statistical method called multiple (linear) regression. Those of you whose eyes just glazed over can feel free to skip to the results. The rest, follow me. However, this method assumes that the result (the dependent variable or, here, pitching runs saved per win) has a linear relationship with its component variables (the independent variables, which I'll define soon). Given that a reliever's role is best defined statistically by the innings he pitches per appearance (IP/G) and the number of saves per appearance (Sv/G), I chose those as the variables. But when you graph these variables against James' pitching runs saved per win, you get anything but a line. In fact the two curves defined by the variables double back on themselves, since the Sutter and Nen roles result in a lower score from James. I could have assumed that there had been some error induced and forced a line on the data, but 1) that would be ignoring the results of James' study and 2) it would make the Nen type the de facto best. Here is a table with the James roles and the ratios for those roles (note that 1) the type letter will be used as a shortcut throughout the study and 2) the starter stats are based on all-time averages for pure starters that I derived myself):
So next I tried grouping relievers under the various types by similarity to the average type that James defined. I thought that using the season ranges that he provided would have been too facile and would have ignored the fact that reliever use does transcend era to a certain degree. But grouping pitchers by how much their stats resemble Clint Brown's or Robb Nen's had its own inherent problems. What does "similar" mean after all? And could a pitcher be "similar" to two of our archetypes? I also had another problem. What do I do with middle relievers, some of whom scored very high in pitching runs? They didn't even resemble Brown, and lumping them under starting pitchers would be unfair. So I monkeyed around with the data some more. And then I found a way to divide post-Sutter type relievers from the earlier ones. That was saves per inning pitched (Sv/IP). A Sutter type has a Sv/IP of .342. The highest value for the pre-Sutter style relievers was .188. I used .25 as the dividing line. To divide the Sutter types from the Nen ones, the main criterion I used was innings pitched per appearance (IP/G). The Nen relievers were close to 1 IP/G and the Sutter ones approached the IP/G of the previous eras (1.82). I used 1.5 IP/G to separate the two groups. Initially, I set the upper limit for the Sutter group at 2.25, but found that only 100 pitcher years exceeded this value and that this group most resembled the Sutter type. So I ended up including them with the Sutter group. The Sutter group ended up with 502 pitcher seasons, and the Nen with 814. The earlier groups are differentiated by saves per game (Sv/G). I set the groups up like this: starters/middle relievers less than .1 Sv/G, Brown between .1 and .2 (inclusive) Sv/G, Face between .2 and .3 Sv/G, and Wilhelm greater than .3 Sv/G. (Again, I limited the Wilhelm group to .4 Sv/G at the high end but found that the group that exceeded this figure best fit with the Wilhelmites.) The resulting pitcher season count per group was: middle reliever 20,883, Brown 3,481, Face 1,362, and Wilhelm 1,075. Next, I tackled the conversion of middle reliever pitching runs saved to wins (by the way, the middle reliever type is "M"). They belonged in neither the starting pitcher camp nor the Clint Brown camp. Brown's denizens averaged 1.83 IP/G, and the starters averaged 6.87. My solution was to draw a line, literally, between the two and project each middle reliever along that line based on his innings pitched per appearance. The resulting formula is not for the faint of heart: RWin = PR/((8.5-5.88)/(6.87-1.83)*IP/G + (5.88-1.83)). Here is a final breakdown by era of each of the various types:
Note that each type has outliers outside of its era, but that each does peak in the appropriate era. Note especially how the Sutter type (D) has died out completely in favor of the Nen type (E). Lastly, middle relievers ("M") began to die out with each new reliever innovation but have been growing steadily since the Seventies as bullpens diversified and more pitchers per game were used. To be completed…
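As a postscript, here is a minimal sketch of the grouping rules and the runs-to-wins conversion laid out above. It is an illustration of the method, not the actual study code; in particular, the handling of seasons sitting exactly on the .2 and .3 Sv/G boundaries is a guess.

# Season grouping per the criteria above, plus the runs-per-win conversion
# (James' archetype values; middle relievers use the interpolation formula).
RUNS_PER_WIN = {"Brown": 5.88, "Face": 5.14, "Wilhelm": 4.47,
                "Sutter": 4.73, "Nen": 4.64}

def classify(relief_ip, g, sv):
    sv_per_ip, ip_per_g, sv_per_g = sv / relief_ip, relief_ip / g, sv / g
    if sv_per_ip > 0.25:                       # post-Sutter styles
        return "Sutter" if ip_per_g >= 1.5 else "Nen"
    if sv_per_g < 0.1:
        return "M"                             # starters/middle relievers
    if sv_per_g <= 0.2:
        return "Brown"
    if sv_per_g <= 0.3:
        return "Face"
    return "Wilhelm"

def relief_wins(pr, relief_ip, g, sv):
    role = classify(relief_ip, g, sv)
    if role == "M":                            # interpolated middle-reliever conversion
        slope = (8.5 - 5.88) / (6.87 - 1.83)
        return pr / (slope * (relief_ip / g) + (5.88 - 1.83))
    return pr / RUNS_PER_WIN[role]

# Jim Kern's 1979 line (143 relief IP, 71 G, 29 Sv) lands in the Wilhelm group.
print(classify(143, 71, 29))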
Primey Time
2004-02-06 21:40
Vote for the man who promises least; he’ll be the least disappointing.
—"Marvin" Bernard Baruch

Mike's Baseball Rants is a finalist in Baseball Primer's Primey Awards for the following categories: #5 Best Internet Baseball Article: "Hall's of Relief" Series. Please vote early and often—pretend it's a mayoral election in Chicago. Vote here. I should run away with the Best Internet Baseball Article. My competition is Rob Neyer, Michael Lewis, and Dan Le Batard. Yeah, I never heard of 'em either.
“Hall’s of Relief”—Final Analysis, Part II
2004-02-05 00:26
Previous entries: The 1870s, ‘80s, and ‘90s The 1900s and ‘10s The 1920s, ‘30s, and ‘40s The 1950s The 1960s The 1970s The 1980s The 1990s and 2000s 2003 Notes: Part I & II To Come: Final Analysis: I, II, III, and IV.

Bull by Committee

A committee is an animal with four back legs.
—John "Max" le Carre

2003 became a litmus test for the bullpen, or closer, by committee. In the offseason the Red Sox hired Bill James and allowed their itinerant closer, Ugueth Urbina, to leave via the free agent route. Boston then adopted James’ theories with regard to relief pitching and voila, it was all around the hot stove circuit that the Sox were employing a bullpen by committee. Of course, the source for all the Jamesian theorizing, an article titled “Valuing Relievers” in The New Historical Baseball Abstract, does not mention anything about a bullpen by committee or a closer by committee. Actually, James doesn’t even use the term closer. He prefers “ace reliever”. And the point of the article is to arrive at a means to value and to maximize the ace reliever’s contribution. His observations I have already documented in the Gagne section above. Anyway, James vociferously defended the Red Sox plans as having nothing to do with a bullpen by committee. But the press saw the Red Sox picking up veteran relievers like Chad Fox, Ramiro Mendoza, and Mike Timlin and labeled the Boston bullpen a “committee”. So who was right? Well, the Red Sox did use ten different pitchers, including two from their rotation (Wakefield and Fossum), to save games. Is that unusual? Actually, it hadn't happened since the 1995 Detroit Tigers traded Mike Henneman (a 1.53 ERA and 18 saves) to Houston for Phil Nevin just after the trade deadline and used nine different relievers, not too effectively, to finish out the year. Before that it hadn't happened since 1987, when Baltimore and Cleveland both let 10 relievers pick up saves. Here's the complete list (the first to do it was the Tigers in 1909, when every pitcher on the staff but one—he only appeared in two games—picked up a save, posthumously):
By the way, the only team to have more than 10 pitchers record a save since Bruce Sutter revolutionized the closer role was the 1979 Dodgers. Bobby Castillo led them with 7 saves (20.6% of the team's 34 saves) and a 1.11 ERA in just 24.1 innings. Don Sutton (1 save), Jerry Reuss (3), and Bob Welch (5) were even in the mix. Fourteen different Dodgers finished games; eight finished 10 or more. Now, that's a bullpen by committee. Castillo's seven saves comprised just 20.6% of the Dodgers' saves that year. That was the all-time low for a team saves leader…until this year. The Tigers' save co-leaders, Franklyn German and Chris Mears, recorded just 5 saves each, and the Tigers had a woeful 27 saves as a team. That means that the saves leader for Detroit registered only 18.52% of the team's saves, the lowest ever. Here's the list of all team "closers" who recorded 30% or less of their team's saves:
OK, so back to the Red Sox. They certainly shared the saves around, but does that mean that they constituted a bullpen by committee? What is a bullpen by committee anyway? Well, it seems to me that a bullpen by committee would not only share the saves among many pitchers, but would share them pretty equally, at least among the better relievers. What if we looked at the numbers for the pitchers who finished second and third in team saves? Here is a table of the highest save totals all-time by men who finished second in saves on their respective teams (co-team leaders are both listed):
Now here are the men who finished third (or fourth) on their teams with the most saves:
In 2003 the Red Sox’s saves leader was mid-season acquisition Byung-Hyun Kim with 16. Next was Brandon Lyon at nine, and then Chad Fox at three. Actually, if you look at the Red Sox game log for 2003, it’s pretty apparent that they were not employing anything like a bullpen by committee. They just had a succession of unsuccessful, putative closers or, as James puts it, relief aces. The 2003 season started with a bang for the Red Sox pen. In game one, March 31 at Tampa Bay, the Sox led 4-1 going into the bottom of the ninth. In what was potentially a save situation, Boston turned to Alan Embree. When he relinquished two runs on a Terry Shumpert home run, Chad Fox was summoned. Fox was acting as James’ relief ace, coming in with a one-run lead. It was also a save opportunity. Fox lost the game on a three-run, two-out, walk-off home run by Carl Crawford. And Grady Little started to stray from the relief ace construct that James laid out, though Fox remained the closer for a short time. In the next game Bobby Howry was given an 8-6 lead in the bottom of the eighth, and he quickly lost it, giving up two runs in one-third of an inning. The Red Sox did win, 9-8 in the 16th, however, and Brandon Lyon pitched three solid innings to pick up the win. Howry did have a save opportunity (if he had pitched the final two innings and kept the lead). However, he was not acting as James’ relief ace, since the relief ace is only employed with a one-run lead, in a tie ballgame, or when the team trails by one. In game three Boston led 7-5 in the eighth. Fox came in to record his first save, though this was not technically an opportunity, according to James, in which to use the relief ace. Game four was a blowout and Fox rested. The Sox then went to Baltimore and won a one-run game, 8-7. However, there was no save or relief ace opportunity because Boston led 8-1 going into the bottom of the seventh and 8-3 going into the bottom of the ninth. Ramiro Mendoza gave up four runs in the ninth. Again Fox was rested. The wheels started to come off the relief ace concept in game 6. Boston and Baltimore were tied, 1-1, as the bottom half of the ninth began. Boston turned to Chad Fox as the relief ace in a non-save opportunity but an ideal opportunity according to James’ relief ace criteria. Fox spelled the always bubbly Pedro Martinez and quickly relinquished a one-out walk to B.J. Surhoff. Conine doubled, and with first open, Gibbons was intentionally passed to load the bases (a strategy probably not advocated by James). Fox went 3-0 to Tony Batista, then worked a full count, but finally walked in the winning run. Their next save opportunity (or relief ace opportunity) did not come for six games. On April 13, the Red Sox led the Orioles 2-0 at home. Starter Tim Wakefield came in with the two-run lead in the eighth and earned the two-inning save. However, it should be pointed out that this was not a relief ace opportunity. Fox had only been used for one scoreless inning during these six games, in a blowout game, apparently to get a little work. The next game, Boston led Tampa 5-1 in the top of the eighth. Ramiro Mendoza quickly allowed two runs and left with no outs, two men on, and a 5-3 lead. Mike Timlin let the Devil Rays tie the game on a Marlon Anderson one-out single, but stayed in the game and earned the win after the Sox scored a run in the ninth. This was technically a relief ace opportunity (after Tampa tied the game). Chad Fox was rested but was not used.
It seemed that he remained the closer but that the concept of the relief ace was no more. It seems odd, then, that ESPN chose to criticize the Jamesian bullpen approach as “Closer by Calamity and Closer by Catastrophe” in the recap of a game in which James’ theories were never employed. Even odder, Fox was used in the next game, with Boston trailing 4-2, to lead off the eighth. Fox pitched a scoreless eighth and then earned his first win as the Red Sox scored 4 in the bottom of the eighth. Lyon came in for the save. Fox then pitched a mop-up ninth three days later in a 7-3 win over Toronto. On April 20, the Red Sox and Jays were tied 5-5 in the top of the eighth at Fenway. Mike Timlin was used for the last two innings and the Red Sox won 6-5. On April 22 in Arlington, Fox was then entrusted with a one-run lead (5-4) with one out in the eighth, after Timlin allowed three runs. He earned his second win with 1.2 hitless innings. On April 25, Fox earned his third and final save in a Boston uniform, holding a three-run lead with two out in the eighth and two men on. Meanwhile Lyon was being used to finish the blowout games and carried a 1.64 ERA through April 24. On April 27, Boston won a game 6-4 over the Angels in 14 innings. They had led 4-2 in the bottom of the eighth and turned to Lyon in what was a save opportunity but not a relief ace opportunity. Lyon gave up a run, and Fox was brought in for the ninth. He allowed the tying run in one-third of an inning, and that was the end of Fox as the Red Sox closer. The Sox had long since abandoned the relief ace concept. On April 30, they had just tied the Royals 2-2 entering the eighth. Ramiro Mendoza, who had pitched horribly to that point, was left in the game and was charged with two runs, though they scored after Lyon replaced him. The Sox won with three runs in the bottom of the ninth. On May 1 Brandon Lyon was anointed the official closer with a save, his second, in a 6-5 win over the Royals. Lyon remained the closer pretty much until he handed the job over to Kim in July. Kim remained the closer for the rest of the year, aside from a four-inning save by Casey Fossum, a three-inning save by Bronson Arroyo, and a save by Mike Timlin in relief of Kim in the ninth on September 19. So there you have it. The Red Sox had nothing close to a bullpen/closer by committee. They did follow James’ tenets for a short time but quickly abandoned them. From mid-April on they employed the same strategy as most any other team; they just got poor performance from the closer role. When I think about a bullpen by committee, I see Cleveland in 1993, Toronto in 1985, and LA in 1979. The Indians had a good group of relievers (Eric Plunk, Derek Lilliquist, Jeremy Hernandez, Jerry Dipoto, and Bill Wertz). All had a park-adjusted ERA between 20% and 92% better than the league average. None of them amassed more than 15 saves, but the first four had at least 8 each (and career highs for Lilliquist and Plunk). Also, each of the first four finished between 22 and 40 games. The Blue Jays in 1985 had four relievers who recorded between 8 and 14 saves each and finished 19 to 51 games (Bill Caudill, Tom Henke, Jim Acker, and Gary Lavelle). Their top five relievers had park-adjusted ERAs between 31% and 109% better than league average (between 2.03 and 3.32). The '79 Dodgers I discussed above.
They were led in saves by Bobby Castillo (7, with a 1.11 ERA), followed by Dave Patterson (6, with a 5.26 ERA), Bob Welch (5, with a 3.98 and 12 starts), Lerrin LaGrow (4, 3.41 ERA), Jerry Reuss (3, with a 3.54 ERA and 21 starts in 39 games), three others with two saves, and three with one save. Of the 8 pitchers on the staff who started at least 10 games, five appeared as relievers. They may have transcended the bullpen-by-committee mold and may have anachronistically approached the old John McGraw teams at the start of the twentieth century. McGraw solidified the use of the relief ace, but would cannibalize his starters (Joe McGinnity, Christy Mathewson) to accommodate it. Also, Sparky Anderson and his quick hook were highly influential in the history of the bullpen-by-committee approach. In his nine seasons in Cincy, he had five in which the pitcher who finished third in saves amassed at least 7. And in only five seasons did he have a reliever record 20 or more saves, even though the Reds had three 100-win seasons and just one with fewer than 88 wins during his reign. Finally, the 2000 Baltimore Orioles should be a cautionary tale for anyone considering the bullpen-by-committee route. They started the year rotating the closer job among Mike Trombley, Buddy Groom, and Mike Timlin. They blew 22 of their first 49 save opportunities. The trio finished the year with 19 total saves and ERAs between 4.12 and 4.89. Eventually, rookie Ryan Kohlmeier was given the job. He pitched well (a 2.39 ERA with 13 saves in 25 games), but fell apart in 2001 (a 7.30 ERA with 6 saves) and was out of baseball. The problem, as with the Red Sox's original configuration, was that the personnel were not strong enough or deep enough to fill out the entire bullpen and man the closer role as well.

Where To, Buddy (Groom)?

The excessive increase of anything often causes a reaction in the opposite direction.
—Plato "Shrimp"

All of this non-standard use of relievers got me to thinking about the state of relief pitching in 2003. There seemed to be enough clues to indicate that something related to relief pitcher use was afoot around baseball. With Detroit's all-time-low save co-leaders (by percentage of team saves), Boston's sharing the saves among ten pitchers, and the White Sox's splitting the closer job among three pitchers, there seemed to be a change in the way relievers were used. The closer role seems to reflect the economy during the current Bush administration. The elite are excelling (e.g., Gagne, Smoltz), the poor are floundering (the Tigers), the middle class are getting squeezed (Ugueth Urbina, Armando Benitez, Billy Koch), and everyone is looking for a bargain (Rod Beck, Joe Borowski). Maybe I'm overstating the case. How do we know that there's anything more than the normal cyclical changing of the guard in a number of teams' closer roles? Well, here's a table, for every year since the save stat became official, of the average percentage of the team save leader's save total to the team's total saves, the yearly change in that percentage, and the average save total per "closer":
Note that the greatest dropoff in leader-to-team save percentage came last year, the only year with a greater-than-10% decrease. This came after a pretty steady increase following Dennis Eckersley's role-redefining season in 1988 (the start of what I call the post-modern closer era). And it’s not as if the Tigers' closer issues skewed the data. There were a number of teams with low percentages:
Another indication that change is afoot is that the standard deviation of the leader-to-team save percentage shot up to its highest level in thirteen years and the second highest of the save era, which is especially odd since one would expect the standard deviation to drop as the majors expand (because of the additional teams being averaged):
Here’s one more illustration, expanding a table that I created in the Nineties section. It contains the percentage of team save leaders who amassed a certain share of their team’s total saves. For example, the 100% column tells you the percentage of all “closers” who registered all of their team’s saves. Note how each bracket increases, especially into the late Nineties and early 2000s, until 2003:
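For anyone who wants to roll their own version of these tables, here is a minimal sketch. The column names are Lahman-style (yearID, teamID, SV, one row per pitcher-season), and the bracket cutoffs are illustrative; this is not the actual code behind the tables above.

import pandas as pd

def save_share_tables(saves):
    # Team totals and the team leader's save count for each season
    team = (saves.groupby(["yearID", "teamID"])["SV"]
            .agg(team_sv="sum", leader_sv="max"))
    team["leader_pct"] = team["leader_sv"] / team["team_sv"]

    # League-wide view: average leader share, its spread, and the yearly change
    by_year = team.groupby("yearID")["leader_pct"].agg(["mean", "std"])
    by_year["yoy_change"] = by_year["mean"].diff()

    # Bracket columns: share of team "closers" at or above each cutoff
    for cut in (1.0, 0.9, 0.8, 0.7, 0.6, 0.5):
        by_year[f"{int(cut * 100)}%"] = (team["leader_pct"] >= cut).groupby("yearID").mean()
    return by_year

# Usage: save_share_tables(pd.read_csv("Pitching.csv")[["yearID", "teamID", "SV"]])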
So what's going on? Well, one thing is that teams are dramatically cutting payroll. That makes them question whether paying perennially mediocre closers like Ugi Urbina and Billy Koch four million dollars is being fiscally responsible (or whether it’s preferable to line the owners’ pockets instead). The impecunious A's seem content to mine for undervalued closers and then let them go when their price tag goes up. They have had four closers in the last five seasons (Billy Taylor, Jason Isringhausen, Koch, and Keith Foulke) and will have a new one in 2004 (Dusty Rhodes?). I think that with the offenses back in abeyance, managers went back full bore to the tried-and-true closer role, which is: if there is a save opportunity in the ninth, bring out the closer. However, as the media and the fans became more sabermetrically informed, they began questioning a strategy that left the closer, supposedly the best arm in the pen, unused in the seventh and eighth when the game may be on the line. Oftentimes, once the ninth inning rolled around, the save opportunity had already evaporated. So the cresting wave of one-inning closers broke and fell back this last season. Another problem was the quality of some of the closers. They too often, like Greg Brady’s Johnny Bravo, merely fit the suit. Jose Mesa in Philly is a perfect example. He had a few years with high save totals and sub-3.00 ERAs, but given his wildness never seemed too secure on the mound. All of that came back to haunt him in his deplorable 2003 season. Again, why pay someone four million dollars to come into a 4-1 game in the ninth and then walk the bases full while striking out the side? So where to next? It seems that 2003 was not a one-season anomaly but rather a shift in reliever usage. Relief pitching strategy seems to change every ten or so years. It’s like they say that there is a war every twenty years, or one for each generation: baseball’s generations just cycle a bit more quickly. The current usage pattern started with Eckersley’s 1988 and became entrenched around 1990. Perhaps the offensive surge in the mid- to late Nineties, two rounds of expansion, and/or the expansion of the middle relievers’ roles extended its shelf life. With just a handful of elite closers, teams seem content to muddle through by jury-rigging the closer role. Indeed, many clubs seem to disassemble and re-assemble a bullpen almost every offseason. That’s what a surfeit of free agent pitchers will do for you. I may be wrong, and Gagne’s big 2003 season may be the clarion call back to the clearly defined, ninth-inning-only closer, but I doubt it. One thing that would move the process along would be to redefine the outdated save rule by eliminating the automatic three-inning save and the automatic three-run-lead save. They could also make the hold stat official. Why not credit a reliever who holds a lead at an important juncture? A hold may be more important to a game than a save, at least under the save rule’s current configuration. If a closer’s saves and holds were both citable in arbitration and free agency cases, then closers would be more amenable to coming in with a one-run lead in the seventh. One thing I think will probably not be tried again for some time is the bullpen/closer by committee. Even though the Red Sox never really employed it, they gave the bullpen by committee a bad name. A manager would be vilified in the press and by the fans if he chose to use one any time soon. The preferred method now seems to be giving a series of relievers the closer role on a trial basis.
If one succeeds, great; ride him until he fails and then get someone else. Finally, here are the appearance and saves leaders for the decade so far:
Here’s an update for pitchers per role for the Aughts including the 2003 season (RA=Relief Appearances; P/G=Pitchers per game; #P=Number of pitchers in total):
Here’s the breakdown for starters:
Relief pitchers:
Swingmen: