Lots of folks on the internet have been upset about the D.C. Circuit's ruling against the FCC's net neutrality regulations. I accidentally got on Lawrence Lessig's Rootstrikers mailing list when I tried to make a critical comment on a Citizens United blog post, so I've been getting lots of messages on that. I have always been unclear on why I should care one way or another, since the sweeping apocalyptic changes being predicted seem to have little basis in empirical evidence. After all, the regulations only date to late 2010, and it's not like the internet was radically different before. ISPs violating neutrality seemed to be the exception, and the violations were generally reversed after a customer backlash. However, Timothy B. Lee has an interesting article on how the economics of the internet have been changing, making net neutrality rules (or something analogous to them for the new environment) relevant even if they weren't before. Eli Dourado has a response here, claiming Lee misunderstands the nature of transit and the Comcast/Cogent/Netflix deal, but that might just further Lee's point that we need to think in terms broader than "net neutrality" in the original Tim Wu sense (which may not even be strictly desirable for every use-case). To me the bigger question is why the European ISP market is so much more competitive than ours. If we could achieve that, consumers could have neutrality if they really wanted it.

On another note, the neutrality issue frequently gets framed as a threat to screw over small companies in favor of big established players. But the logic of price-discriminating monopolies (which are actually more efficient than monopolies unable to discriminate) is that those with a greater willingness to pay end up paying more. The big established players are thus likely to subsidize those less willing to pay, not the reverse.

Just learned of Gary Becker's passing via Mankiw (via Eli Dourado's twitter feed). He's most closely associated with "economic imperialism", which might be summed up as applying economics to subject matter normally investigated by sociology. Plenty of pop "economics of everyday life" has followed afterward, with his clearest acolyte being Steven "Freakonomics" Levitt. I cited (or re-cited, since Albert O. Hirschman's citation is what alerted me) a classic Becker paper on the economics of irrational agents here.

I commented on Gurri's thread I was discussing earlier to chide Morgan Warstler. In case you don't know him, he's a libertarian-leaning Republican (though a detractor of neoreactionary tendencies) who frequently comments at blogs like The Money Illusion and bangs on about some hobby-horses. I'm often irritated by his confident proclamations, which don't seem to have any strong basis in subject familiarity, much less domain expertise (this unwarranted confidence also irritates me in neoreactionaries). Inspired by Bryan Caplan, a while back I made a 10:1 bet (in his favor) that Rick Perry would not get the nomination. For years afterward I would mock him at every opportunity about that bet and whether he was going to pay up, assuming he slunk away because he didn't want to admit he got things wrong (although he started off his own blog doing just that). But it appears I did not give him enough credit. So credit where it's due: plenty of internet blowhards will never agree to bet at all. And afterwards they may try to weasel out or minimize the results (even Bob Murphy seems guilty of the latter to some extent, which I suppose shouldn't be surprising given the Austrian attitude toward empiricism). But not Morgan. I'll probably still be annoyed by some of his confident pronouncements, but a man willing to put some money where his mouth is and admit when he's wrong should feel free to annoy me.

I haven’t been a very regular reader of Gurri & the Umlaut, but from what I have read it would be hard to think of a blogger better suited to write this. I vaguely recall seeing myself listed on the periphery of neoreaction, which is fair enough if Robin Hanson & Razib Khan are as well. I am of the right in part because I’m so far toward the latter end of Jacob Levy’s rationalism vs pluralism axis that he would not consider me included in the big liberal* tent (although I certainly have rationalist impulses). So it’s to be expected that I agree with Gurri’s critique of these neoreactionaries as being rationalist constructionists.
*As in “classical liberal”. (more…)

Maybe.

A number of times I've linked to this EconLog post linking a talk from Robin Hanson, on the importance of overcoming bias before you take up a cause. However, it appears the video is no longer there. It can still be found here.

An essay by Yvain that I enjoyed and sometimes link to appears to have fallen victim to url-rot. An archived version is here, but rather than requiring people to use the Internet Archive (which is sometimes overtaxed), I'll just copy it below.


(more…)

Marginal Revolution led me to this piece by Charles Blahous. The name sounded familiar, so I looked him up on Wikipedia. Most of it was uninteresting except for the bit where he got his PhD in computational quantum chemistry, then became a legislative aide to a Wyoming Senator. What? Who goes from the first to the second? Entitlement programs seem so boring: a big demographic wave is coming (or arriving as we speak), and everyone can see it. You can talk a lot about it, but it's mostly going to be ignored because there are so many political stakeholders and veto points. Karl Smith would even argue that can-kicking is the rational thing to do, and in fact the best we can do if you want to get depressingly existential. After working to get a PhD in the hard sciences, who would find that more interesting and a good use of their time?

Taken from here. Note that whether torture can work is not the same question as whether we should do it. (more…)

I've heard a bit about Scala*, but I'm still very much a complete newbie. The languages I'm most familiar with are Java (along with C#/C++/C) and Python, so I'm treating it as a sort of mixture of the two. Messing about in an interactive tutorial I got to a section treating the interpreter as a calculator. On a regular handheld calculator, dividing two integers commonly results in a decimal, but in a programming language (like Scala) where integers and decimals are completely different "types", dividing two integers will instead truncate the result to another integer. So to use math-theory speak, the integers are closed under division: the "image" (or "range") of the operation stays within the original "domain". I decided to try changing that (should the verb be "opening"? "unclosing"?).

I noticed that the Int class has a toDouble method, and sure enough Double does as well. I hoped that duck typing by itself would be sufficient, but unfortunately what would be a primitive in Java is an AnyVal in Scala rather than an AnyRef (the analogue of Java's Object), and when I tried duck typing via a structural type the compiler insisted the values needed to be AnyRef. Fortunately, implicit classes allow us to treat them otherwise.

implicit class SuperInt(val i: Int) extends AnyRef {
  def toDouble = i.toDouble
}
implicit class SuperDouble(val d: Double) extends AnyRef {
  def toDouble = d.toDouble
}
def divide(numerator: { def toDouble: Double },
           denominator: { def toDouble: Double }) = {
  numerator.toDouble / denominator.toDouble
}

The divide function works the same whether you pass in an Int or Double. Doing some googling, I found that implicitly converting Any to AnyRef is frowned upon in Scala, but I’m way too ignorant to know the reasons, having only found out they were different things from the same people doing the frowning. Those who know why, or who have suggestions for the right (what’s analogous to “Pythonic”?) way to do things are welcome to chime in.
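For anyone who wants to try it, here is a self-contained version of the sketch above. The wrapping object, the explicit return types, and the scala.language.reflectiveCalls import (which silences the compiler's structural-type warning) are my additions; the rest follows the snippet in the post.

```scala
import scala.language.reflectiveCalls

object DivideDemo {
  // Wrappers so that Int and Double values can satisfy an AnyRef
  // structural type, even though they are AnyVals themselves.
  implicit class SuperInt(val i: Int) extends AnyRef {
    def toDouble: Double = i.toDouble
  }
  implicit class SuperDouble(val d: Double) extends AnyRef {
    def toDouble: Double = d.toDouble
  }

  // Structural ("duck") typing: accept anything with a toDouble method.
  def divide(numerator: { def toDouble: Double },
             denominator: { def toDouble: Double }): Double =
    numerator.toDouble / denominator.toDouble

  def main(args: Array[String]): Unit = {
    println(1 / 2)          // plain Int division truncates: prints 0
    println(divide(1, 2))   // promoted to Double first: prints 0.5
    println(divide(3, 4.0)) // mixed Int/Double also works: prints 0.75
  }
}
```

Note the reflective call: the structural type is checked at compile time, but the actual toDouble invocation inside divide goes through runtime reflection, which is one reason structural types carry a language-feature flag.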

*I don't think Steve Yegge's ideological ranking of languages was the first place, but it's entertaining enough to link. And yes, annoyance with the "liberalism" of one language (JavaScript) did inspire me to look up something in the opposite direction. Although I still don't actually know what about Scala makes it more "conservative" than the C family.

I was familiar with the phenomenon of two (or more) different people making the same scientific discovery around the same time. I wasn't familiar with the name though. A handy thing to refer to.

EconLog has had good guest bloggers. Eric Crampton did it at one time; he can now be found at Offsetting Behaviour. David Henderson was supposed to be a guest but became permanent, and while his content tends toward rather generic libertarianism (with perhaps some extra emphasis on pacifism and the rare detour into monetarism), I'm glad of his presence due to the store of anecdotes he's accumulated in his time, and his thoroughgoing Canadian niceness, which contrasts with bitter former co-blogger Arnold Kling (and fellow Canuck Steve Williamson, come to think of it). Garrett Jones was one of those people on Twitter who needed to start a regular blog, and had some good stuff, but I was disappointed that he often brought up Real Business Cycle stories when he himself had earlier explained how RBC no longer fits the "stylized facts" of the economy. More recently Alberto Mingardi and Art Carden joined, with both mostly serving up generic libertarianism without necessarily much economic content. Distributed Republic is no more (when trying to read an old post I got a bandwidth-exceeded notice, not sure if anything changed to cause that), but I'm sure there are plenty of other generic libertarian sites they could contribute to. As it is, they don't seem to be part of the same econblog "conversation" I expect from the site. It's almost enough to make me miss Kling, since his fondness for persisting in views he knew to have negligible supporting evidence at least got him into some arguments with other bloggers. Almost.

Gene Callahan regards that story as largely mythical. Those knowledgeable about the past are invited to toss in their two cents.

This isn't "frequency illusion" because my subjective frequency is unchanged, but the "Baader-Meinhof phenomenon" is likely making this diavlog on the psychology of optimistic bias more salient. That's because I was reading a bit of Daniel Kahneman's "Thinking, Fast and Slow" yesterday concerning how good & bad moods affect System 1 vs System 2 thinking. The actual segment of the diavlog I linked to is titled "The optimal level of optimism", but (as is made clear by the participants) that level is not "optimal" for accuracy. The depressed are known to be more accurate (this is called "depressive realism") except in regard to the persistence of their depression. Tali Sharot claims in the link that the severely depressed are also less accurate, and that the mildly depressed are most accurate.

On the other hand, while searching on Overcoming Bias for support regarding “depressive realism” I came across this old post casting doubt on the concept.

On an unrelated note, Kahneman made a big deal out of priming in the book, beginning with the experiment where the word “Florida” causes people to walk slower (though he mentions later that those who dislike the elderly can react in the opposite way). He even says “You have no choice but to believe that you react this way”. So kudos to Kahneman that he has been so adamant about the need to replicate the priming studies in the wake of some failures to do so.

I’ve been quite delinquent in posting here due to procrastination over a book review which guilts me out of more active participation in the blogosphere. But this comment from Greg seemed to merit being made into a post of its own: (more…)
