

By Dave Hoffman

Introducing Guest Blogger Stephen Galoob

I’m pleased to introduce guest blogger Stephen Galoob, who last visited with us in 2013. Stephen is an assistant professor at the University of Tulsa College of Law. He is a graduate of UVA law school and received his Ph.D. from U.C. Berkeley’s Jurisprudence and Social Policy program. Stephen teaches criminal law, criminal procedure, legal ethics, and an undergraduate course on moral and legal responsibility.

Stephen’s recent work includes:

Norms, Attitudes, and Compliance, 50 Tulsa Law Review 613 (2015) (with Adam Hill)

Intentions, Compliance, and Fiduciary Obligations, 20 Legal Theory 106 (2014) (with Ethan Leib)

Are Legal Ethics Ethical? A Survey Experiment, 26 Geo. J. Legal Ethics 481 (2013) (with Su Li).

Via Concurring Opinions


The Civilizing Effect of Legal Training

The Cultural Cognition Project has a new article out on how motivated cognition interacts with professionalism:

This paper reports the results of a study on whether political predispositions influence judicial decisionmaking. The study was designed to overcome the two principal limitations on existing empirical studies that purport to find such an influence: the use of nonexperimental methods to assess the decisions of actual judges; and the failure to use actual judges in ideologically-biased-reasoning experiments. The study involved a sample of sitting judges (n = 253), who, like members of a general public sample (n = 800), were culturally polarized on climate change, marijuana legalization and other contested issues. When the study subjects were assigned to analyze statutory interpretation problems, however, only the responses of the general-public subjects and not those of the judges varied in patterns that reflected the subjects’ cultural values. The responses of a sample of lawyers (n = 217) were also uninfluenced by their cultural values; the responses of a sample of law students (n = 284), in contrast, displayed a level of cultural bias only modestly less pronounced than that observed in the general-public sample. Among the competing hypotheses tested in the study, the results most supported the position that professional judgment imparted by legal training and experience confers resistance to identity-protective cognition — a dynamic associated with politically biased information processing generally — but only for decisions that involve legal reasoning. The scholarly and practical implications of the findings are discussed.

Kahan and I have gone back and forth about how best to characterize the results of the study. He, modestly, seeks to constrain the inferences to the data and to push back against the vulgar understanding of the judiciary as merely housing politicians in robes. I think the study speaks to something larger still: the value of legal education & experience in producing situation sense, which enables lawyers and judges (and, to a lesser extent, law students) to agree on legal outcomes notwithstanding their political and ideological priors. Such legal judgment is, after all, one of the practical skills that law school conveys, and one it ought to boast about.

Via Concurring Opinions


The Significant Decline in Null Hypothesis Significance Testing?

(Cross-posted at Prawfs.)

Prompted by Dan Kahan, I’ve been thinking a great deal about whether null hypothesis significance testing (NHST, marked by p-values) is a misleading approach to many empirical problems. The basic argument against p-values (and in favor of robust descriptive statistics, including effect sizes and/or Bayesian data analysis) is fairly intuitive, and can be found here and here and here and here. In a working paper on situation sense, judging, and motivated cognition, Dan, other co-authors, and I explain a competing Bayesian approach:

In Bayesian hypothesis testing . . . the probability of obtaining the effect observed in the experiment is calculated for two or more competing hypotheses. The relative magnitude of those probabilities is the equivalent of a Bayesian “likelihood ratio.” For example, one might say that it would be 5—or 500 or 0.2 or 0.002, etc.—times as likely that one would observe the results generated by the experiment if one hypothesis is true than if a rival one is.

Under Bayes’ Theorem, the likelihood ratio is not the “probability” of a hypothesis being true but rather the factor by which one should update one’s prior assessment of the probability of the truth of a hypothesis or proposition. In an experimental setting, it can be treated as an index of the weight with which the evidence supports one hypothesis in relation to another.

Under Bayes’ Theorem, the strength of new evidence (the likelihood ratio) is, of course, analytically independent of one’s prior assessment of the probability of the hypothesis in question. Because neither the validity nor the weight of our study results depends on holding any particular prior about the [question of interest], we report only the indicated likelihood ratios and leave it to readers to adjust their own beliefs accordingly.
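To make the quoted passage concrete, here is a minimal sketch of how such a likelihood ratio might be computed. It is my own illustration, not the working paper’s actual analysis: the numbers, the normal sampling model, and the two point hypotheses are all assumptions chosen for exposition.

```python
# Minimal illustration (not the working paper's analysis): the likelihood
# ratio for an observed experimental effect under two competing point
# hypotheses, using a normal sampling model and made-up numbers.
from scipy.stats import norm

observed_effect = 0.40   # hypothetical observed difference in means
standard_error = 0.15    # hypothetical standard error of that difference

h_null = 0.0             # hypothesis 1: no true effect
h_alt = 0.50             # hypothesis 2: a substantively meaningful effect

# Probability density of the observed result under each hypothesis.
likelihood_null = norm.pdf(observed_effect, loc=h_null, scale=standard_error)
likelihood_alt = norm.pdf(observed_effect, loc=h_alt, scale=standard_error)

# The likelihood ratio: how many times more likely the observed data are
# under one hypothesis than the other. Under Bayes' Theorem, readers
# multiply their prior odds by this factor to update their beliefs.
likelihood_ratio = likelihood_alt / likelihood_null
print(f"Likelihood ratio (meaningful effect vs. no effect): {likelihood_ratio:.1f}")
```

On these made-up numbers, the data are roughly 28 times as likely under the “meaningful effect” hypothesis as under the “no effect” hypothesis; a reader with strong contrary priors can still discount the result, which is exactly the division of labor the quoted passage describes.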

To be frank, I’ve been resisting Dan’s hectoring entreaties to abandon NHST. One obvious reason is fear: I understand the virtues and vices of significance testing well. It has provided me a convenient heuristic to know when I’ve “finished” the experimental part of my research, and am ready to write the over-promising introduction and under-delivering normative sections of the paper. Moreover, p-values are widely used by courts (as Jason Bent is exploring). Or to put it differently, I’m well aware that the least positive thing one can say about a legal argument is that it is novel. Who wants to jump first into deep(er) waters?
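For contrast, the familiar NHST treatment of the same made-up numbers from the sketch above is a two-sided z-test against the null of no effect. The p-value clears the conventional 0.05 bar, which is the comfortable stopping heuristic I just described, but it says nothing about how strongly the data favor one substantive hypothesis over another.

```python
# Illustration only: the conventional two-sided z-test for the same
# hypothetical numbers used in the likelihood-ratio sketch above.
from scipy.stats import norm

observed_effect = 0.40   # same made-up numbers as before
standard_error = 0.15

z = observed_effect / standard_error   # test statistic against H0: effect = 0
p_value = 2 * norm.sf(abs(z))          # two-sided p-value, roughly 0.008

# NHST calls this "significant" at the 0.05 threshold, but the p-value is
# silent about which substantive hypothesis the data support and how strongly.
print(f"z = {z:.2f}, two-sided p = {p_value:.4f}")
```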

At this year’s CELS, I didn’t see a single paper without p-values. So even if NHST is in decline, the barbarians are far from the capital.  But, given what’s happening in cognate disciplines, it might be time for law professors to get comfortable with a new way of evaluating empirical work.

Via Concurring Opinions


Proceduralists’ Shibboleths

(Cross-posted at Prawfs).

Recently a call for nominations came out on the civil procedure listserv: what’s the worst civil procedure case ever? Nominations poured in – even as Pepperdine’s excellent symposium on this very topic was all but ignored. Sadly, recency bias trumped careful thought, and a plurality of respondents focused on Twiqbal. In some ways this is an unsurprising result. Twiqbal hit a sweet spot for modern scholars. The decisions together appear to be politically conservative (fitting modern progressives’ newfound suspicion of the Supreme Court); they cry out for empirical examination (fitting modern scholars’ newfound love of counting things); and they produce a test whose indeterminacy makes Socratic dissection easy.

But here’s the thing: dozens of scholars have spent enormous effort on these problems, and have found essentially no observable effects on party and judge behavior, whether in or out of court. In that way, Twiqbal is a black hole for scholarship — it sucks quants and non-quants alike in, but nothing comes out.

Consider two recent papers — one by Jonah Gelbach, forthcoming in Stanford, and one by Roger Michalski and Abby Wood, under review.  As a part of a dazzling empirical & game-theoretic analysis, Gelbach points out that “a reasonable observer could conclude that the heated debates over the empirical evidence on Rule 12(b)(6) motion grant rates haven’t—couldn’t—shed any light at all on the actual effects of Twombly and Iqbal.”  (Emphasis added.) Michalski and Wood, studying state adoption of Twiqbal,  conclude that whether “at the federal or state level, attorneys and judges are either not as attuned to procedural changes as many commentators think they are, or plaintiffs were already pleading with factual specificity so as to negotiate earlier and more favorable settlements.” And yet, as they point out, “many academics, practitioners, and commentators simply refuse to believe that the switch from notice pleading to plausibility pleading would not have an empirical effect.”

What’s going on? Is this motivated cognition by progressive proceduralists, who can’t admit that the worst cases of their generation (or any!) had no measurable effects? (That’s not to say that Twiqbal hasn’t had an effect in the world – just not one that is observable.) Because their priors are so strong, later evidence is discounted. As such, Twiqbal is quickly becoming a progressive proceduralist’s shibboleth: to belong to the academic community (and to be welcome at conferences), one has to agree that plausible pleading is implausible, evil, and otherwise wrongheaded. Defending the decision is like defending Lochner. It can be done, but you really ought to teach at Mason.

Or is it something else? Maybe Twiqbal has attracted attention not because it actually represents a change in practice today (after all, no one was truly engaging in notice pleading) but rather because the cases represent a watershed in procedure – the beginning of a return to a pre-1938 code or fact pleading regime. Like Dole or Printz, it’s a signal of a revolution that’s coming. My colleague Craig Green has worked over the last several years to identify certain cases as iconic, particularly retrospectively — will Twiqbal be such an icon...

Via Concurring Opinions


Why Do Peer Review?

(Cross-posted at Prawfs, where I’m visiting this month.)

A recent post by Steve Bainbridge raises a nice issue: how should we think about peer review? Traditional peer-edited legal journals have established procedures (JELS pays honoraria and blinds; JLS pays but doesn’t; JLEO has fantastic peer comments, etc.). But in the last five years, most of the top student-edited journals have moved to some kind of peer system – and many of us are now routinely asked, after a student-led process, to review for publication. That peer review is never paid, and very often professors are asked to review for journals that have never accepted them. *cough* Yale Law Journal, I love and hate you. *cough* That can frustrate even non-curmudgeons. Why do it?

For institutional credit. I’m aware of no school that gives formal credit for these student-edited peer reviews. Are you? If so, what does it look like?

For Law Review credit. One explanation I’ve heard for doing a review for, say, Harvard Law Review, is to motivate them to feel that they owe you at least a rejection on your own work, instead of a magnificent silence. In my experience, there’s some truth in this: doing peer review gives you the email of an AE, and credit with that person. I routinely have succeeded at being at least read by a journal I’d just done peer review with. I haven’t yet moved from a read to an acceptance. But I did get a personalized email from HLR once. It mentioned that they had an unusual number of great articles that cycle, which meant that they couldn’t publish even good work like mine. I thought that was nifty! Of course, the credit isn’t merely transactional: being a peer reviewer means you are an “expert” in the field, which should provide your article some kind of halo effect. Of course, this feeling is a quickly depreciating asset, and never rolls over from year to year. Use it or lose it!

For the love of the game. For those of us who think that student journals should move exclusively to double-blind review, with faculty participation as a veto, participating is a price we should gladly pay. The problem is that the system isn’t perfectly constructed. Law journals should insist that peer comments will be conveyed to authors – this makes the comments much less likely to be petty (“cite me!”) and more likely to be constructive.

Bainbridge argues against mixed peer review systems, but none of his objections strike me as particularly relevant if the process is “student-screen, peer-veto.” That is how I understand the system to work at SLR, YLJ, and HLR. I don’t know about Chicago – I would’ve thought their selection involved a maximizing formula and ended with a number.

Via Concurring Opinions
