David Gugerli, a historian specialized in the philosophy of science, gave a lecture on the history of databases and of data management as a signifying practice. In a Deleuzian fashion, he states, knowledge operates in distributed networks: the world is a database, and database management systems are shaping our world. It is a very big market (think IBM, Oracle and Microsoft) with a high level of client lock-in, in which users are heavily dependent on their data management systems.

In older databases, data was structured in a hierarchical tree system, so the structure of the database determined which questions could be asked. With the coming of the search and query language, every possible combination of entries could be interpreted through recombination and relation. Interpretation of data thus became independent of the data’s structure and place. These new systems were highly efficient, made room for new and unexpected questions, and were also more narrative-based.

Gugerli compares this rise of the relational database model with the rise of critical thinking in the 1960s. Critical thinkers like Barthes, Derrida and Foucault found that a literary work could be seen as a machine that delivers interpretations: it is a galaxy of signifiers, it has no beginning, and we can gain access to it in different ways, none of which is authoritative. The interpretation of a text cannot be determined by its author; the interpretations the reader produces while reading are part of its meaning. These traits can also be seen in the development of database concepts, and the cultural consequences of these changes are stupendous, according to Gugerli. They not only influence the relation between author and text but affect any form of information processing in every format. This also has major societal consequences, for it has changed our information processing and caused major changes in software structure.
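The shift Gugerli describes can be sketched in a few lines of SQL. The schema and contents below are hypothetical, not from his talk: the point is only that in a relational system any tables sharing a key can be recombined by a query the database designer never anticipated, whereas a hierarchical tree fixes the paths along which data can be reached.

```python
import sqlite3

# Two hypothetical tables linked only by a shared key.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE texts (id INTEGER PRIMARY KEY, "
            "author_id INTEGER, title TEXT, year INTEGER)")
cur.executemany("INSERT INTO authors VALUES (?, ?)",
                [(1, "Barthes"), (2, "Foucault")])
cur.executemany("INSERT INTO texts VALUES (?, ?, ?, ?)",
                [(1, 1, "S/Z", 1970), (2, 2, "The Order of Things", 1966)])

# An ad-hoc recombination: the question is posed at query time,
# independent of how the data is physically structured or stored.
rows = cur.execute(
    "SELECT a.name, t.title FROM authors a "
    "JOIN texts t ON t.author_id = a.id "
    "WHERE t.year < 1970 ORDER BY a.name"
).fetchall()
print(rows)  # [('Foucault', 'The Order of Things')]
```

In a hierarchical system, answering this question would require the designer to have built an access path for it in advance; here the relation is produced by the query itself.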
The search society and its idea of recombination operate in real time. This requires continued change management, Gugerli concludes: a permanent fluctuation of its composition and of its practices of search and query, which shows the importance of the underlying (relational) database system. The full text of Gugerli’s talk can be downloaded here.
Matthew Fuller took a closer look in his talk at alternative modalities of search and how to develop them. Search engines, he states, have a morphology, a scheme that generates a body; they have different internal structures. Search engines can be seen as cultural machines: they connect information and knowledge. Since the rationalization of culture is impossible, Fuller argues, this explains the noisiness and inaccuracy of search engines. Search engines also focus on the analysis of users and the identification of situations. This is an even more abstract process than mere personalization: the user is not individuated but recognized as a force that produces information. As Fuller states, we need to think on the basis of populations of data-producing subjects instead of on the basis of individuals, and we need to focus on the dynamics and conditions of search engines.

Fuller went on to discuss different kinds of search engines in order to delve deeper into the morphology of search. Viewzi, for instance, adopted the aesthetics of the iPhone into a search engine: it makes maps of images and lets you see whether they are linked in any way. Oamos views information search as an experience; it is not a full search engine but draws on the results of a number of search engines and looks at relational information. Kartoo and Liveplasma are examples of network visualization interfaces; DAUM, NAVER (both Korean) and Directionless.info of context-driven engines. With these examples Fuller gave a good overview of the multiple possibilities for search and for interface design. Delving deeper into the complexities of the web and its users, and reflecting this in its design, is the challenge for the next wave of search, Fuller states.
Lev Manovich’s lecture focused on what we can learn from Google. How can search engine design serve as a new methodology for cultural analysis, or, how can we use Google as a tool for cultural analysis? First of all, he stresses, it is important to look at the size of our data. Most of the time cultural analysis focuses on a very small sample of cultural production, whereas a search engine uses every accessible web document (and Google now also indexes Twitter and Facebook), offering a much larger scale for the analysis of contemporary cultural production and interpretation. Secondly, when it comes to categorization, cultural objects are mostly placed into a small number of genres/categories. A search engine can analyze each web document to generate its unique description (using 200+ signals). As Manovich states, although significant research in the automatic classification of web pages into genres exists, Google does not use it, because it wants to give you whichever page is most relevant. Thirdly, when it concerns “links”, traditional cultural criticism gives an analysis of a small number of selective links (“influences”) between a given object/person and others. A search engine, on the other hand, gives a systematic consideration of all (explicitly defined) links between a given web page and other pages. Fourthly, Manovich states that cultural production builds on the old, on old features (characteristics, attributes, dimensions), or on a small number of subjectively selected features that differ from text to text. A search engine focuses on a large number of features and can take the interaction with the user into consideration. Whereas theoretical work in cultural analysis traditionally does focus on reception, in practice it offers more of an analysis of documents as experienced by a critic. Web analytics, on the other hand, can give a good analysis of user interactions with a web site.
As Manovich states, cultural analysis looks from a single critique, and is thus not empirical, as it does not look at user interactions with cultural products. Next, when it comes to zoomability, cultural analysis mostly focuses on a document, a creator, a group, a period or a paradigm, with highly uneven coverage. With Google search technology or Google Trends, the search patterns of billions of people over a number of years can be analyzed. Think of the possibilities of Google Earth and Google Street View. As Manovich states, software developed by the digital culture industry, and also by the academy, often contains innovative theoretical ideas about culture embedded in its design (i.e. what the software does to calculate its results). However, this design is often used to support an outdated (i.e. 20th-century) understanding of culture when it comes to search (looking for particular members of a set) or the classification of culture into a small number of genres. Manovich understands search technology as a new paradigm for cultural analysis: what if we take the principles from search engines, web analytics and Google Trends (interactive visualization of patterns) and embed them in new software tools for analysis? In this way we can extract features from each document in a set, instead of using the features to classify documents into a few classes, and we can visualize the patterns and the variability across a set. As Manovich concludes, the old search paradigm is based on knowing what you want to find, while the new search paradigm is based on finding relations. Manovich ends by asking: might Google take over the Humanities?
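The shift Manovich proposes, extracting a feature vector from every document in a set rather than sorting documents into a few classes, can be sketched minimally. The corpus and the two features below (word count, average word length) are hypothetical stand-ins for the “200+ signals” a real search engine would use.

```python
def extract_features(doc: str) -> dict:
    """Extract a small feature vector from one document."""
    words = doc.split()
    return {
        "n_words": len(words),
        "avg_word_len": sum(len(w) for w in words) / len(words),
    }

# A hypothetical corpus standing in for a large set of web documents.
corpus = [
    "the text is a galaxy of signifiers",
    "knowledge operates in distributed networks",
    "the world is a database",
]

# One feature vector per document, rather than one class label per document.
features = [extract_features(doc) for doc in corpus]

# Look at variability across the whole set, not membership in a category.
lengths = [f["n_words"] for f in features]
print("word counts per document:", lengths)
print("spread:", max(lengths) - min(lengths))
```

The point of the sketch is the change of unit: the output is a distribution of features over the whole set, which can then be visualized, instead of a handful of genre labels.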
Alessandro Ludovico and Christophe Bruno both focused on the potential search engines offer to art, by using search engines as well as by reflecting upon them (in an often critical and playful manner). Ludovico discussed his project Google Will Eat Itself (GWEI), for which he established a fake website about online marketing and subscribed it to the Google AdSense program, which lets you publish Google textual ads on your web site. After opening a bank account for the project, they developed special software that would generate a unique IP address, simulating a user, and make automatic clicks on the Google ads. As Ludovico explained, it would be impossible to distinguish a fraudulent click from a genuine one, making it hard for Google to ban them. Ludovico sees GWEI as a conceptual artwork, as a scientific experiment; through this project they got a lot of attention in the media. As Ludovico concluded, the worst enemy of a giant is not another giant, it is a parasite: if enough parasites suck out enough money, the giant will be sucked dry. Ludovico wants to dissect, decode and expose these giants through conceptual artworks and theories in order to create cultural antibodies.

Christophe Bruno also discussed some of his recent artworks, or Google hacks. He mentioned Epiphanies, a Google poetry hack he developed (based on Joyce’s walks in Dublin, during which Joyce would collect random sentences overheard). Keywords typed into the Epiphanies machine collect random sentences from Google and send them back to the program to form a new poetical structure. Poet Ton van ’t Hof gave a short introduction to Flarf, a cut-up technique that uses Google search results, rearranging them into a new text. As Van ’t Hof explains, these kinds of cut-up techniques are not new: they were used in the 1920s by the Surrealists, and in the 50s and 60s by, for instance, Burroughs.
As Van ’t Hof states, its first practitioners pursued an aesthetic dedicated to ‘the exploration of the inappropriate in all its guises’. The idea behind Flarf is to mine the internet with odd search terms and then distill the results into often hilarious and sometimes disturbing poems, or plays on other texts. According to Van ’t Hof the genre is very popular now and would not have existed without the Internet or without Google.
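Mechanically, the Flarf cut-up is simple recombination. A minimal sketch, assuming the snippets below stand in for fragments scraped from search-engine results (no real search API is queried here):

```python
import random

def cut_up(snippets: list[str], seed: int = 0) -> str:
    """Rearrange found text fragments into a new 'poem'."""
    rng = random.Random(seed)   # fixed seed keeps the result reproducible
    lines = snippets[:]         # copy, so the source list stays untouched
    rng.shuffle(lines)
    return "\n".join(lines)

# Hypothetical search-result fragments standing in for mined text.
snippets = [
    "the worst enemy of a giant",
    "a galaxy of signifiers",
    "random sentences overheard in Dublin",
    "every possible combination of entries",
]

poem = cut_up(snippets, seed=42)
print(poem)
```

A fuller version would feed odd search terms to an engine and harvest the result snippets; the rearranging step, the part Van ’t Hof traces back to the Surrealists and Burroughs, is all that is shown here.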