Starting a thread for this, since this isn't really about the Top-Level categories per se.
There's some complaining about the process so far, and I personally have to agree. Some suggestions seem to be accepted. Some not. No explanation. In fact, no explanation that I've found on who even makes the decisions. I realize that someone (or ones) has to have final say, because some folks will never come to agreement in a large group. But it'd sure be nice to get feedback as to why particular decisions were made. And it'd be nice to know what decisions are under consideration.
Right now it doesn't seem to be collaborative. Instead it seems to be "you guys argue about stuff in that group there, and a couple of us will maybe consider some of the things you say and watch for our next announcement to see if we're even listening to you."
That may work out to provide a halfway decent product. But the process isn't really what I thought I was contributing to.
I think that the first list of subjects was too broad for most people. I know I had a problem fitting things into it. The second list is more specific and closer to how I think people look for information in a library. The people who want more specifics, e.g. breaking pets down into types of animals, might as well use Dewey, because that is what that system does. At some point someone needs to make a decision or the arguing will go on forever.
I think part of the problem may be the tools being used for collaboration. Talk is great for discussing books. However, the majority of the discussion of the top-level categories took place in a few 200+ post threads. This did two things:
(1) It raised the cost of entry into the process, since joining required reading multiple 200+ post threads and making sense of the bits of disconnected information on the wiki. I know this is a reason I didn't jump into the discussion.
(2) There was little public tracking of the status of various discussions. I've contributed to a variety of open source projects and have often had bugs marked as "Won't Fix." However, in none of these cases did my contribution seem meaningless since a discussion about the decision took place in a forum where it was easy to see how the decision was made.
I think it would help transparency if the data from the classification experiment could be made available on the wiki. At the moment I feel a bit like the work I did there has disappeared into a black hole. It would make me much more likely to do more similar work on the sub-category classifications if I could see the output from the first lot and understand how it has altered the course of the discussion.
The aims of the classification experiment weren't really explained. For example, there is nothing to suggest that any of the suggested alternates would have been worse (or better). I suppose a pure numerical split could be sought, so that each top level has a roughly equivalent percentage share of the total. However, as there was some self-selection in the books classified, I wouldn't want to give that too much weight.
Personally, I think the most interesting output from the data would be if it shows where the scheme broke down: books that were put into different categories by different people, or that were unclassifiable under the proposed system.
My naive assumption was that they would use the data to say something like "These types of books had a lot of disagreement about where they should fall, or a lot of 'undecided' responses (or whatever the 'I can't find an appropriate place for this' option was) from people familiar with the book. Maybe we should re-think the categories." Instead it doesn't seem to have been used much at all -- at least, other than combining autobiography with memoir, they don't seem to have paid much attention to the suggestions.
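To make concrete what such a disagreement analysis could look like, here is a rough sketch. The data format, function name, and sample books are all hypothetical, since the actual experiment data was never released:

```python
from collections import Counter, defaultdict

def disagreement_report(classifications, threshold=0.6):
    """classifications: iterable of (book, category) pairs, one per
    user judgment. Returns the books whose majority category received
    less than `threshold` of the votes -- i.e. where the scheme
    'broke down' because classifiers couldn't agree."""
    votes = defaultdict(Counter)
    for book, category in classifications:
        votes[book][category] += 1

    contested = {}
    for book, counts in votes.items():
        total = sum(counts.values())
        top_category, top_votes = counts.most_common(1)[0]
        agreement = top_votes / total
        if agreement < threshold:
            contested[book] = (top_category, round(agreement, 2))
    return contested

# Hypothetical sample: three users classifying two books.
sample = [
    ("Cosmos", "Science"), ("Cosmos", "Science"), ("Cosmos", "Science"),
    ("Walden", "Nature"), ("Walden", "Memoir"), ("Walden", "Philosophy"),
]
print(disagreement_report(sample))
```

Even a simple summary like this, published on the wiki, would show contributors which categories the crowd agreed on and which ones need re-thinking.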
This seems to be "open" only in the sense of "we'll tell you what we're doing", not "we'll actually use your suggestions". Which is better than nothing, but I think they could have done a better job of setting expectations.
I think you're being a bit too gentle. What's going on seems quite different from Tim's original explanations. Take a look at the blog post calling for help, http://www.librarything.com/thingology/2008/07/build-open-shelves-classification....
One part in particular stands out to me. This is describing what the leaders of OSC were supposed to be:
I hate to seem like I'm stirring up trouble. I really love the idea of the OSC, as originally expressed by Tim. I just don't see it happening quite that way...
Hello! Thanks King Rat for starting this thread. Laena and I have been talking a lot about how to improve this process, and we are looking for as many constructive suggestions as we can get. In our roles as guides for the project, at first we let the threads debate undisturbed, waiting for consensus to emerge. Sometimes a temporary consensus would emerge only to be changed a few weeks later. At some point people started posting that it was time to make decisions and we should step in. That is when we began to develop the first list of top levels. We continued to read the threads, and in late January began to edit the list based on suggestions that appeared to have the crowd's support (for example, that Comics is a format not a topic, changing Cooking to Food & Drink, creating scope notes, removing Military, etc.).
So my question for you is: how do we improve the process? Ssd7 in message 3 makes two really good points. How do we lower the barrier of entry for new participants, and how do we discuss proposed changes in a forum where it is easy to see how decisions are made? To that I would add a few more questions: At what point is consensus considered to be reached? How and where should decisions be reported (the threads do not seem an ideal place)?
Lastly, to give you a little more context for what we do as guides of the project: we spend a lot of our time on the back end getting public library data, working with 8 graduate students to help develop the second levels, etc.
I don't understand. It seems to me that the threads are exactly the place to enter decisions. If the discussion is in the threads the decision needs to be there too, along with an explanation about why many of the suggestions weren't taken.
Hi conners! I'm really glad you replied. Speaking for myself, I would love to see you and Laena post more. Leaving the discussions undisturbed makes it seem like nobody's paying attention. I'm sure you guys have thoughts about some of the ideas being discussed, and I would certainly like to hear them more often.
I can understand the motivation to remain hands off and let the group decide, but a greater degree of engagement might make everything feel a lot more open.
"How do we lower the barrier of entry for new participants..."
Occasionally put up-to-date info on the wiki.
"...and how do we discuss proposed changes in a forum where it is easy to see how the decision is made?"
By posting to the appropriate threads.
This seems pretty obvious...
how do we discuss proposed changes in a forum where it is easy to see how the decision is made?
That's easy -- actually discuss them, right here. Engage with the conversation as it's going on, rather than keeping silent and then swooping in from on high with decrees that cannot be questioned. And whatever you do, don't do what laena did, and just post the same form-response on a dozen threads. No response at all is preferable to that sort of insulting brush-off.
When you do that, even if you do take a tiny minority of suggestions into account, you give us no confidence that we're being listened to at all, which is why I for one am not going to bother with the second-level classification -- my expertise is in the shunned pariah area of Science, after all.
#9 conners, I guess it's worth remembering that interaction doesn't signify that you're dictating what has to be done. You may have noticed that we are a rather verbose lot! I think people feel better informed when they at least understand some of the process behind your end of this whole shebang. Tim always does it well when there are big changes discussed (possibly much to his mental detriment ;) ) so even when people disagree with reasoning, they know why something was done (and we all take it so well when we disagree with Tim ... we never threaten to flounce off LT and never darken his doors again ;) ).
From your post, what strikes me is that we don't really know what happens at your end at all (you have grad student underlings ... how great is that ... they can do your every bidding ;) ). Things sometimes get lost under the weight of argument in post threads, so here's an idea that occurred to me. Since part of the goal of this system is to create something that is very much 'out there' to be used, any amount of public visibility is good from the outset. There's not really a place for general dissemination of information about how the project is coming together and what's going on where. So why not kill two birds with one stone and set up a blog for updates and generally OSC-related things? I know that you post to the LT blogs from time to time, but a blog dedicated to this project could be posted to on a more regular basis and used as a central point to explain reasoning, etc. Then the discussions could continue over here without swamping the pertinent information, and the project would have a more unified 'home' that takes on board both the LTers' work and the back-end stuff going on at your end. Might that make people feel more like they're working together with, rather than against, what you are doing?
I would have to echo everything people have already said. The non-experts who are contributing aren't shrinking violets.
Secondly, it very often seems that the mental model of the project leads doesn't get challenged, because you don't post very much. For example, the science area that lorax mentions is key to a lot of people. We never hear why you think something is the best solution, we never get to challenge it, and we rarely get to understand why our suggestions aren't incorporated. That makes people feel they are being ignored; for example, it took five days before you replied to the requests for more communication. Also, the current level of communication tends to result in bursty topics: when you do say something, there is a sudden flood of replies from us, and it probably seems a bit overwhelming. More regular ongoing discussion should even that out.
1. Post something about the classification experiment with the top level, with details and statistics -- although I doubt that the system recorded everything I would have. There are also problems if you disagree with part of the process itself: it is easy to classify something as science, but if your objection is to science being a single top-level entity, then discussion is the only recourse, and we got none of that (except between ourselves).
2. Do not conclude that just because a few people are arguing a case it means that there isn't much support for it. Many times people keep quiet because the proponent of the argument is making a good case and they don't want to dilute the conversation. I think this was true for science.
3. Post something about your principles for this project. We have the facets stuff (and that is good) but do you see self-consistency as a good thing for example.
4. Swiftly acknowledge mistakes. Plays are classified as Fiction, screenplays aren't according to the second level descriptions laena posted. That either needs a correction or an explanation and debate.
On a general point I think you may have to re-jig the top levels somewhat. We have things that don't fit into the current top levels and should (radio). People are still discussing the top-level somewhat in the second level topics. To go back and make changes at the top level wouldn't be a failure.
Thank you for your response. I have not engaged much in discussion because I thought that this would be a data-driven, scientific process using the contents of libraries, LT tagging data, and statistical patterns that would show where Dewey works well and where it does not. I got this impression from Tim's original announcement. What I found instead was an opinion-driven process where a few vocal people dominated discussion, and it became clear that BISAC was a foregone conclusion in most cases. I kept silent in many cases because I felt that the anomalies in the data would bring problems to light. I took the time to classify many of my own books to help build the data set. If there are 8 graduate students working, I'm sure they've discovered some of these anomalies and produced lots of data to share.

I am trained as a scientist, so my natural approach is to use the data as a test of my theoretical model. In my view, conversation before the model is tested reflects personal bias in many cases; you need more data to average out the error of personal differences. The model should get refined in an iterative process, and critical voices can explore the reasons for anomalies and suggest corrections. Preferences for one model over another should be based on evidence clearly presented to all. Make no bones about it, this method is a very different approach compared with a social constructionist one, where a few voices mold the outcome and there's no true "reality" to model, only a set of opinions to poll. I can't imagine working on second levels if the data supporting the first haven't been released; it's like stressing out over epicycles when the problem is that the earth isn't the center of the universe!
As an example, there was vigorous discussion about the category known as body, mind, and spirit in many bookstores/libraries, elsewhere described as occult, new age, spirituality, and other variants I can't think of at the moment. The only thing which held this category together was that it described "marginal" beliefs, so that the owners of libraries/bookstores with huge Christianity/inspirational sections could claim that they weren't endorsing heresy while keeping customers from accidentally buying such garbage. In some such stores, you find the books on non-Western religions and religious practices such as meditation and yoga there, next to the aliens and UFOs. Basically, what the category amounted to was the stuff the main population thought was BS, heresy, or both.

As yoga and meditation got trendy for rich white suburbanites, those books started moving into the health section (look at different versions of BISAC to find this change), and in larger cities with immigrants you could actually find a religion section with more than just Judeo-Christianity. It's important to note that the only tag which correlated closely with paranormal in the original testing was magick, and such magick is sometimes a hobby and sometimes part of legitimate religious practice. This category also tends to have a lot of materials on the fringe of Jungian psychology, many of which are BS in my view, but some of which relate more closely to psychoanalysis and mythology than they do to cultism. Of course, there are the Prozac worshippers who want to banish all non-pharmacological, non-CBT psychology to the occult section anyway, even though such practices have been validated in randomized clinical trials. Regardless, why not put "false" beliefs about aliens, etc. in with the subjects themselves (e.g. life on other planets is astronomy) rather than creating a BS category? Librarians should classify, not serve as arbiters of truth for their communities.
Alternative viewpoints on a topic should be shelved together, much as Ann Coulter and Al Franken sit side by side.
Another point of contention was the lumping together of the sciences versus the fine categorization of some social sciences. On a positive note, the separation of literary genres and criticism into many different top-level categories did not hold up and was consolidated in the latest version. Still, it would have been nice to see the data supporting that.
Dewey's system seems out of date today because his perceived relative values of topics imbalanced how he allocated the number and size of categories. Public libraries vary vastly in what they collect. Some cater to romance readers, housewives, and hardware men. Some have an educational mission and allow adults who cannot afford a university education to read lay literature about topics other than gardening and NASCAR. My *public* library in Chattanooga TN was an educational library, but my library in Cary NC is a recreational one. By looking at tagging and books held by many LT members, we can get a feel for an "average" library and not just what a few influential people find reasonable. There's no rush on this process; better to do it right, because a system too similar to what we have already is not going to attract any real libraries to adopt it. After all, they're not rushing to adopt BISAC as is or the Free Decimal Correspondence, even though it would be little change from Dewey.
One thing I would suggest is that proposed actions/decisions be called out by you guys before they are made. For example: "Economics to get its own top level." Put these things out when you see consensus emerging on them. Then folks could run numbers if they want or need to, last arguments could be made and separated from the general hubbub, etc.
(Somewhat as a side note, I personally wouldn't put "the categories are even" or "the categories are a measure of what's important" among the goals for classification. As a library/bookstore user, I am concerned mainly with two use cases (to borrow a term from my software development background): where do I find the book called X, and where do I find books on subject Y (or similarly, where do I find books similar to book X)? If there are a lot of books on a subject, I don't mind that category being outsized, so long as the category is coherent. I don't care whether each science area gets its own top level, so long as each science area gets a coherent category at some level.)
Something that would be useful is some kind of voting system to help us gauge levels of consensus. On the religion thread we are getting close to an agreed set of categories, but there are a few which we aren't sure whether they belong at second or third level. Nobody is highly invested either way, so it would be useful if we could do a quick hands up count among ourselves to decide without going round the arguments several times.
I want to (very belatedly) add that just counting commenters is a lousy way to judge the level of concern for an issue -- it may just be that, if nobody disagrees, a thread peters out for lack of anything new to say. So everyone may agree on a particular issue and still have it get little discussion.
Oh, and conners (#9), are your "8 graduate students" subject-matter experts? Or more librarians who will ignore the input of the subject-matter experts as you have previously?
I thought that one of the premises of this project is that we want to put things where ordinary people will find them and not listen to elitist subject-matter experts. That is, after all, how we end up with Pets as a top-level category.
19: I think some sort of "up or down" vote system would be nice. This could theoretically be done on Talk or the wiki. It would also be nice to document the process, so everyone is on the same page as to what weight the votes will have and how they are conducted.
Could the flag mechanism in the forum be used for this, with the option to flag positively if you agree with an opinion expressed in a post but don't want to post a "me too"?
You can only flag a message for negative reasons. Only reviews have positive flags.
Yes, I know that's how it works at the moment, but I can't imagine that it would be very difficult to change.
A possible option for the voting idea, without having to change the way talk works, would be if there was an OSC blog. They could post proposed actions/decisions (as per #18), summarize the arguments for and against, and put it to a vote there. That would make things seem more official, and they'd be able to finalize the decision by closing the voting. It would probably do a lot to lower the entry barrier, too, since anyone with just enough free time to read a blog could participate.
I think that is a great idea and will discuss with Tim and David. The logistics of who manages this are an issue, but maybe we can start a thread to discuss that :)
Also, I would like to request that we try and be constructive rather than derogatory on this forum. This is a project to build a classification system for public libraries, which would not exist without the dedicated librarians that run them. We are all here to try and build a great system, together: experts, patrons, readers, everyone.
Libraries wouldn't exist without their members either. Neither library users nor librarians (dedicated or otherwise) can claim any kind of moral superiority.
If you think that there is anything personally derogatory in this (or any other thread) the solution has been provided - flagging abuse. As I cannot see any flags at all that means that there isn't any one post that two people feel is personally derogatory or abusive.
>31 Suncat: That user has also been removed, so I'm guessing it was spam rather than heated discussion of OSC.
I certainly didn't mean to denigrate librarians.
But if this was intended to be "librarians build a free classification system, telling you about it as it happens" then it shouldn't have been billed as open, or at least it should have been clarified that only the opinions of librarians would carry any weight.
Given that mine was the most passionate post on this thread, let me clarify my "derogatory" remarks and then I'll move on.
I began my post by thanking conners for his response. I did this because I fully intended to offer constructive criticism and felt that his response indicated it would be heard.
Neither Dewey nor LC denigrates the POV of the paranormal/occult by placing those books in a top-level category of their own; they place them near other similar topics (philosophy/psychology). Only bookstores looking out for the bottom line and librarians pressured by conservative communities single them out, and even there the library classifications as such never did so as strongly as the proposed OSC does. My point was to resist institutionalizing a new discriminatory model which had previously been mostly local practice and is being moved away from even there. It's easy to create new biases in new models with few participating. I worry that this ( http://classifyme.blogspot.com/ ) is where OSC is headed, instead of Tim's vision of a data-driven process.
I aired my dissent because I feel that OSC is a valuable project and I'd hate to see the effort wasted. If I had thought it was completely hopelessly derailed already, I wouldn't have invested my energy in criticism. If asking for more openness and expressing dissent without flag-waving for "our troops" is disloyal, then I revise that assessment. If bones are that easily broken by rhetoric, then I'll take my ball and play elsewhere. I plead guilty and remove myself from discussion and future test participation. The project will succeed or fail on its own merits.
Laena, Conners, anyone? Any progress, decisions, thoughts?
The relatively long periods of silence (10 days is certainly long for a forum) on these kind of issues are certainly not helping the situation.
That's par for the course, though. They haven't ever been active participants in any of the discussions.
It's clear that the expectations laena and conners had were very much at odds with the expectations that the rest of us had, and that clash has probably doomed the project -- laena and conners may come up with something, but it's not going to be the data- and community-driven system that we thought Tim was envisioning.
We have had far longer periods of silence.
The lack of conversation and lack of explanations is why this project isn't working the way that I and others would expect. I would want one of the leads to be very active in the discussions and work at the same level as us plebs in having to explain and justify decisions.
Indeed. We can all imagine it to be a hell of a job, and are grateful to those who've taken it on. Yet, it shouldn't be asking too much to get some information every other day about what's going on - or why nothing appears to be going on. And if doubts are mounting about this whole project ever to take off, it would be fair to let us know. For then, we can go waste our time on other things.
I am more confused than ever now that I have read through the blog that was posted in 35. They say:
In addition to external sources, we're also keeping an eye on the OSC group forums at LibraryThing to keep a finger on the pulse of the thing. Trust us, all the comments, criticisms, and concerns floating around aren't lost on us.
I was not under the impression that the only thing we were doing here was commenting and criticizing. Are the people contributing to that blog also active in the forum or are they just watching?
I only entered this process in a real way once the top levels were announced, because I thought it would be fun to work on some areas where I know something about the field. I think if you look at my posts I have been positive about the process. I will not claim to know what is going on in the background; however, if all the work is going to be done by 5-6 people outside of the LT forums and then just announced to us, I don't see how this is collaborative. I was under the impression that the subcategories being discussed in this forum would eventually evolve into the second level. However, at the same time, there is this group of graduate students working with the blessing of the project leaders and, as far as I know, not actively involved in the discussion here.
This project needs defined processes. I have stopped being active in Talk because of three issues:
(1) The conversations I am a part of have only been between a few people and are not likely to go anywhere without some more input. This is where the leaders or their graduate students could come in.
(2) It is completely unclear how we should codify consensus that arises in conversation.
(3) It is unclear whether (a) the work being done here is being used and (b) if the system will be used by anyone once it is completed.
I understand the leaders of a project not wanting to get involved in some sort of flamewar, but that is not occurring here. Emotions are high at points, but people are bringing up legitimate concerns and simply want the leaders to be engaged in this discussion.
Defining the processes should not be too difficult. We have a wiki and a forum. Both of which could be used to conduct discussions, take votes, and record decisions.
Greetings! We have been discussing solutions to some of the problems we have been having with the OSC process as it develops and evolves.
Our proposed solutions fall into two categories. First, the facilitators (Laena & David) need to do a better job communicating where we are and how decisions get made. This is hard because all of the forum threads are so active and we both have full-time jobs. Second, in order to make that possible, some participants of the project need to take on more responsibility to monitor and summarize the discussions. As the project has grown in size and complexity, two people alone cannot guide this ship!
Here are some of our specific proposed changes based on feedback:
1. Create a blog separate from LibraryThing. Done: http://openshelvesclassification.blogspot.com. This will serve as our one stop information and decision clearinghouse.
2. Before changes are made to the levels or scope notes, someone will post a summary of the pros and cons based on the conversation in the threads and what the perceived consensus is. This will give members a chance to discuss a specific change before it goes into effect.
3. The blog will also be where Laena, David, and others can talk about the work going on obtaining and testing public library data.
4. We need help monitoring threads and organizing data. This is where people willing to take on more of a role in guiding the project come in. Each top-level thread will have a dedicated monitor. There are 42 top levels, sometimes with more than one thread for each.
5. The monitor serves as guide. The person will monitor the discussions in the forum and track what specific changes are proposed. When consensus seems to have emerged, the monitor will summarize the discussion surrounding the change on the blog and state what the proposed change will be.
We hope this process will produce greater transparency and clearer communication between contributors. It will also enable new people to enter the process in a more seamless manner.
Please let us know if you would be interested in monitoring a thread. 8 library science graduate students have already stepped up and offered to help us, but we need 35 more people to volunteer. Check out the blog to see which top levels still need to have monitors.