- Written by Elin Waring
- Published: 10 March 2014
In 1939 Edwin Sutherland gave a presidential address to the American Sociological Society, and it made page 12 of the New York Times. Nine full paragraphs summarized his talk, reporting that "Dr. Sutherland described present day white collar criminals as 'more suave and deceptive' than last century's 'robber barons' and asserted that 'in many periods more important crime news may be found on the financial pages of the newspapers than the front pages.'"
That speech, on December 27, is usually considered the moment at which the term "white collar crime" was invented. Of course crimes by elites, crimes of deception, financial crimes, and crimes in business had been written about at least since Leviticus, but Sutherland pulled all these concepts together with one incredibly evocative phrase. I've often wondered whether this origin story was really as clear cut as it is usually made out to be. I took a look at the Google Books n-gram data for "white collar crime" and "white-collar crime" (full size graph).
"White Collar Crime" Usage Over Time, 1920-2008
The n-gram data certainly seem to be consistent with the story. The data are far from perfect (we know Google scanned a lot of books but not how they chose them; the selection was almost certainly not random, and sometimes the scans include modern additions or annotations), but they are always interesting to consider.
I was also curious about whether the term white-collar crime displaced anything else, but that seems not to have been the case, at least in any obvious way. Still, it is interesting to see the rise of the term "financial crime" since the mid-1970s. The persistence of "robber barons" (a term with its own history) is also fascinating, since it too is such an evocative phrase. (full size graph)
Use of Five Related Terms, 1920-2008
Sutherland's book White Collar Crime was published in 1949, and it is probably fair to credit it with what is essentially a doubling of the term's use at that point (if you combine uses with and without the hyphen). But even before then his work shook things up in the world of sociology, and particularly in the sociological study of crime. I've written before about how Robert Merton revised his "Social Structure and Anomie" paper in response to Sutherland, and how that revision made the version that appeared in Social Theory and Social Structure so much more powerful. The fact that criminology students are often assigned the 1938 version is maddening to me.
I am really interested in the peak that happened in 1980, which is right around the time that the federal money that paid part of my way through graduate school and funded the work that led to Crimes of the Middle Classes was awarded. That is around the same time as the Conyers report and the founding of the National White Collar Crime Center.
Overall, at least for now, it seems as though the story of Sutherland's invention of white collar crime, as both a phrase and a form of classification, is true.
- Written by Elin Waring
- Published: 31 July 2010
Steven Weber's book The Success of Open Source is a book I read when I first joined the OSM board. There are a lot of books about open source, but Weber's is the one that makes the most serious effort to think about open source from a social science perspective. Which is to say, it makes a serious effort to use somewhat systematic empirical data and to apply a number of theoretical concepts from political economy. In other words, it is very much from my world, and I'd guess almost no one actually in the open source world has read it all the way through. This is just like the fact that I've never read, I don't know, Knuth. I can and do read about code and algorithms and so on--no problem reading and understanding most of the mass market books on how to write PHP--but let's be real: there are some books for people who have taken the computer science classes that are for computer scientists, and there are some for people like me who are merely code curious. So that's why people who write software read books like The Cathedral and the Bazaar or Dreaming in Code (both important books) that give an anthropology-lite treatment of the open source world, but they aren't really reading serious social science. If nothing else, you can tell by the bibliographies.
So, three years later, I've decided to start rereading Weber's book. When I first read it there were some things I thought he got wrong and some things I thought he got right.
I'm very interested in the general issue of how people organize themselves to get things done. Sometimes this is done in an intentional and self-conscious manner, but at other times, and much more often, it is shaped by social and institutional forces. For example, early in my time in graduate school there was an important article by two faculty members in my department (Paul DiMaggio and Woody Powell) called "The Iron Cage Revisited: Institutional Isomorphism and Collective Rationality in Organizational Fields" which helped to clarify this. The short version of the idea is that organizations, especially those in the same general environment or field, often end up very similar to each other. The question is why. Is there just some "natural" way for organizations to form? Or can we understand this as a consequence of social and institutional factors? Of course, answer number two is correct. People in organizations tend to think those organizations either just happen or happen solely because of conscious decisions they make, but those two views are extremes of the same wrong approach: the first has no room for people to make decisions, and the second overstates the level of autonomy people have when organizing. So this line of work helps us understand why open source projects tend to follow one of a few patterns.
Weber identifies four key strategies that open source projects use to "manage complexity among a geographically dispersed community not subject to hierarchical control." These are "technical design, sanctioning mechanisms, the license as explicit social structure, and formal governance institutions." (172)
Each one of these deserves a careful look, so I'm going to do separate discussions. Taken together they help explain a great deal.
- Written by Elin Waring
- Published: 19 July 2010
Coincidentally, my starting to put together a literature review on open source happened at the same time as the conflict between the WordPress project and the makers of a theme called Thesis went public (#thesiswp; Mixergy). The arguments and the nastiness are all too familiar to those of us from the Joomla! world. Interestingly, in my very first literature search on the term "open source" I came up with a whole set of law review articles on licensing. I guess that makes sense, since licensing is at the heart of what makes open source/free software different. A lot of them are older, but they are useful nonetheless. So the first article I read was ...
There are many different kinds of law review articles. Some make an argument and some, like Carver's, lay out the state of things at the time they were written. In 2005, the SCO v. IBM case was in a much more unclear state than it is today, though the Progress case had been decided. A law review article is not case law (far too often I see FOSS people quote articles as if they were), but this one is an attempt to summarize the case law and to explain the issues that remain open.
One of the things that was interesting to me was his discussion of dynamic linking, which is one of the complicated areas people get pulled into in discussions of the GPL. Reading it helped me see that the whole topic is pretty much irrelevant for PHP.
> Static linking involves embedding the library in the independent program when it is compiled. A dynamically linked library is not incorporated into a program, rather it exists both in its own place on a computer's hard drive and in its own memory space while in use. Multiple programs might even be dynamically linked to the same library and communicate with it while it sits in a single memory space. Consequently, some have argued that a proprietary program could dynamically link to a library licensed under the GPL, or a program licensed under the GPL could dynamically link to a proprietary library, and in neither case would a "derivative work" of the GPL-covered work be created, since the two programs retain distinct existences and only tenuous connections. (469)
To me, admittedly neither a computer scientist nor a lawyer, neither static nor dynamic linking really describes how PHP brings together core applications (like Joomla!, WordPress, or Drupal) and the code that extends them, even when that code is highly modularized or encapsulated. When Joomla! renders a page, the core and the extension are in the same memory space at the same time, working together to produce what the user sees, save things to the database, write to the cache, or do other things.
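To make that concrete, here is a minimal sketch of the mechanism (the file and function names are hypothetical, not taken from Joomla! or WordPress). In an interpreted language like PHP, a "core" does not link against an "extension" at all; it simply includes the extension's source file, after which both run in one process and one memory space.

```php
<?php
// Simulate an "extension" shipped as a separate file on disk.
// (demo_plugin.php and plugin_render() are invented names for illustration.)
$pluginFile = sys_get_temp_dir() . '/demo_plugin.php';
$pluginSource = <<<'EOT'
<?php
// The "extension": plain PHP that the core will pull in at runtime.
function plugin_render(): string {
    return 'Hello from the plugin';
}
EOT;
file_put_contents($pluginFile, $pluginSource);

// The "core": including the file merges the extension's functions into
// the running interpreter. There is no compile-time or load-time linking
// step to point to, which is why the static/dynamic distinction from the
// quoted passage is so hard to apply here.
require_once $pluginFile;

echo '<body>' . plugin_render() . '</body>', PHP_EOL;
```

Once `require_once` runs, the extension's function is indistinguishable from one defined in the core itself, which is the sense in which the two are "in the same memory space at the same time."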
So the whole issue of whether or not dynamic linking creates a derivative work seems like a red herring. Making what Joomla! calls an extension is a way of improving the application by adding new functionality. Improving the application is a right guaranteed by the GPL and is part of the power of the GPL to produce superior code, and improvements have to be licensed under the GPL. So even though the GPL was really written with compiled languages in mind, in some ways it seems less ambiguous for interpreted languages like PHP than for others.
Carver continues that "ultimately, the issue of what constitutes a derivative software work must be addressed by statute or the courts." I'm sure that's true, but coming from the perspective of a writer, a bug fixer, and an occasional module and plugin writer, I think there is a whole separate issue. It is one that has been highlighted in the WordPress blow up, and it is much more like a print or music copyright issue.
When someone or some group wants to reverse engineer an application, they are generally advised to use a "clean room" procedure. That is, the new code should be written by someone who has never looked at the code of the original application and who is given only a list of specifications. That way you can document that no concepts or code were derived from the original application. That's simply not the way open source code development works. This came up again in the WordPress dispute when people asked whether the Thesis author had learned from studying the theme included in the distribution of WordPress. Then the whole story went from bad to worse with the revelation that actual GPL code is included in the Thesis theme.
One of the things that is really nice about taking the time to read serious articles is that doing so highlights for me why so many people have trouble with the issue. Carver's article is 39 pages and really carefully written, but the arguments people have about licensing online tend to be quick exchanges. It's not fair to say they are 140 characters, because there are clearly some longer, well thought out discussions. And I will say that a lot of them come from new people who haven't really thought things through before. Each time, new people have to go through the same round of arguments and questions. Tweeting, blogging, and forum posts are their way of thinking out loud. It's always variations on the same questions (arguments), because those are the questions people need to ask in order to work their way through the logic. It's not that different, really, from my knowing in advance what 80% of the questions will be for a particular topic in a class. So, you just patiently listen, answer, listen, answer. And you think it through some more so that your depth of understanding is increased, which this article helped me to do.