Friday, 30 September 2016

Before: Theme 5

What is the 'empirical data' in these two papers?
For the first paper, Finding design qualities in a tangible programming space (Fernaeus & Tholander), the empirical data is the data that is created and observed during the testing of the prototypes. The same holds for Differentiated Driving Range (Lundström); however, the authors also collected data during interviews.

Can practical design work in itself be considered a 'knowledge contribution'?
Of course it can! By doing practical design work, you actually get your hands on the material and the interactions that could theoretically work. By trying a concept out in practice, you might find new reasons why it should be designed the way you designed it, or perhaps why it should not. Even if you do not reach any conclusions about the concept itself, the way you carried out your design work is a knowledge contribution in its own right, and could be of great help to other researchers and designers who want to try out similar concepts.

Are there any differences in design intentions within a research project, compared to design in general?
I would say that the design intentions within a research project aim at improving existing concepts or breaking new ground, whereas design in general usually tries to follow and implement the best practices that research has already established. What I have noticed when reading different papers within design research is that most of the time the researchers do not try to make things look as good as possible, which usually is the goal of general design work. A lot of the time, the design process within research projects is very iterative, changing quite a bit as user tests generate feedback that helps form the end result. This contrasts with how general design work sometimes progresses, where you are not always able to do user tests during the process, which gives it more of a genius design approach where the designer puts him- or herself above the user when it comes to design choices.

Is research in tech domains such as these ever replicable? How may we account for aspects such as time/historical setting, skills of the designers, available tools, etc? 
I guess this varies, as you could have different goals with your research. If the goal is to find the best method of evaluating a certain design pattern, then yes, of course you could replicate the research method while changing some part of it. However, if the goal is to create the optimal way of implementing a menu system in a mobile web environment, doing the same research with different designers could create very different scenarios, not to mention all the different user groups and technologies that could be used. All of these parameters could highly affect the outcome of the study. It would be easy to say that you could just use some kind of mockup software on a "regular" computer to dampen the impact of the previously mentioned parameters; however, there are many different mockup tools, each with its own design elements even when they are used only for wireframing. So even trying to do things in a basic way could turn out quite differently depending on the tools used. The answer, then, would be both 'yes' and 'no'. Some more 'meta' concepts could be carried out independently of the technical development process, but some things are just too particular to be replicated well in research.

Are there any important differences with design driven research compared to other research practices?
The big difference to me is that design-driven research is often more qualitative and iterative, as I have described above, compared to other research practices where a lot of data (read: quantitative methods) is usually used in order to understand a certain problem or phenomenon. Design research is more abstract than many other research practices, meaning that there is not a single way to solve a problem - it is very non-binary. There are no clear 'rights' or 'wrongs'. What works for one user group may not work at all for another, which makes 'one-size-fits-all' solutions quite hard to find.

Monday, 26 September 2016

After: Theme 3

The first task of this week's theme was to find a high-quality journal relevant to media technology research with an impact factor of 1.0 or above. I had only heard of the term "impact factor" maybe one week before this task, in one of my other courses where we were supposed to find research relevant to our work. We did not really get a proper, in-depth explanation of the term, however. What I know after this week's theme is that it has to do with how many citations the articles in a journal receive in the years shortly after publication. More precisely, a journal's impact factor for a given year is calculated as the number of citations that year to articles the journal published during the two preceding years, divided by the number of articles the journal published in those two years.
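The calculation itself is just one division. As a toy illustration (the citation and article counts below are made up, not taken from any real journal):

```python
# Toy impact factor calculation. A journal's 2016 impact factor counts
# citations received in 2016 to articles published in 2014 and 2015,
# divided by the number of articles the journal published in those years.
citations_in_2016 = 210   # hypothetical: 2016 citations to 2014-2015 articles
articles_2014 = 60        # hypothetical article count for 2014
articles_2015 = 80        # hypothetical article count for 2015

impact_factor = citations_in_2016 / (articles_2014 + articles_2015)
print(round(impact_factor, 2))  # 1.5
```

So with these invented numbers, the journal would clear the 1.0 threshold from the assignment.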

Continuing on the impact factor, what I additionally learned is that the impact factor for a journal is much easier to find than a comparable figure for a single paper. However, I guess it is rather easy to just look at the number of citations for an article and check that against the publication year, to at least get a quick overview of how big an impact it has had within its research field.

Reflecting on the paper that I chose, I took my time to find one that I actually thought was interesting to read. I only read through it rather quickly, though, as I did not have as much time to do so as I would have wanted. The paper was about how using natural language when searching for something on the web is more effective than using keywords. Personally, I believe this is the way we will search in the future, as it is much easier to find the correct information about a subject you do not know much about than to first have to find the correct keywords.

The second part of this week's assignment started with having to explain what theory is to a first year university student. I remember how it was when I was doing my first year, and reading academic papers was not (still is not) very easy. Therefore, I chose not to just use the words of Sutton and Gregor, but to actually try to explain the content in a very basic way. Hopefully, my explanation was good enough and covered enough of the material to be seen as a "correct" explanation of what theory is and what it is not. At least, I feel that I have gotten a much better understanding of the concept after this week!

Deciding what type of theory was used in the paper I chose was not easy; it was probably the hardest part of this week's assignments. I actually changed my mind a few times before deciding that it was of the type "explanation and prediction". I still do not know for certain if it is the right type, but I did not get any protests during the seminar, so perhaps it was correct after all. Some of the theory types are quite similar, and it is not an easy task to make a strong distinction between them for this kind of exercise.

Thursday, 22 September 2016

Before: Theme 4

For this week's theme, I have chosen the article Everyone's an influencer: quantifying influence on twitter, published in WSDM '11.

Which quantitative method or methods are used in the paper? Which are the benefits and limitations of using these methods?
As the name of the article suggests, the main quantitative method used is the collection of millions of tweets over a period of two months. Right here we have the first benefit - it is very easy to get hold of data, and the speed of collection and data creation is astounding, as millions of tweets are created every day. The authors do not have to create their own data, only read and collect it. This also brings us to another point, which could be seen as both a benefit and a limitation: the authors have no control over the data created. This is great in that the data is pure and objective; however, they cannot directly control the kind of data they get, as you would be able to in a user study where you formulate the questions yourself.

The purpose of the study was to find tweets containing shortened URLs and trace how each URL spreads over Twitter. To do this, the authors analyzed the followers of every person who posted a tweet, checked whether they retweeted the URL, and then repeated the same operation until the URL's diffusion died out. One limitation they bring up is that they only analyzed active users, which makes their results not reliably representative of the entire Twitter user base.
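The cascade-following step described above can be sketched as a breadth-first walk over the follower graph. This is only my own rough illustration, not the paper's actual implementation; the follower graph and retweet record below are invented:

```python
from collections import deque

# Hypothetical follower graph: user -> set of that user's followers.
followers = {
    "alice": {"bob", "carol"},
    "bob": {"dave"},
    "carol": set(),
    "dave": set(),
}

# Hypothetical record of which users retweeted the shortened URL.
retweeted = {"bob", "dave"}

def cascade_size(seed):
    """Breadth-first walk from the original poster, following the URL
    only through followers who retweeted it, until the cascade dies out."""
    seen = {seed}
    queue = deque([seed])
    while queue:
        user = queue.popleft()
        for follower in followers.get(user, set()):
            if follower in retweeted and follower not in seen:
                seen.add(follower)
                queue.append(follower)
    return len(seen)  # seed plus everyone the URL reached via retweets

print(cascade_size("alice"))  # 3 (alice -> bob -> dave; carol never retweeted)
```

Here the cascade reaches dave only because bob retweeted first, which captures the step-by-step repetition described in the study.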

What did you learn about quantitative methods from reading the paper?
I have learned that using existing data like tweets is highly desirable when conducting empirical studies of diffusion such as this one. It is a great way to collect a lot of data without having to manually create a set of research questions, which is hard to do in an objective way.

Which are the main methodological problems of the study? How could the use of the quantitative method or methods have been improved?
The authors do not try to predict how a "word of mouth" campaign could spread on social media in general; they only analyze how it spreads on Twitter. The Twitter ecosystem of followers and friends does not exist on every social media platform out there, which makes this study quite unique and bound to Twitter. For a more general approach, they could have done the same study on several social media platforms to try to get a more general understanding of how things spread on social media.

While two months is a long time when it comes to social media, the gathering of the data could have been conducted during a longer time period, or perhaps during time periods with an interval such as every other month for a year.


Drumming in Immersive Virtual Reality: The Body Shapes the Way We Play
The main point of the paper was to see if people act differently when their body is suddenly transformed into another in an immersive VR environment. The first thing I reacted to was that the entire test group was Caucasian. Is this something the authors did because of technical limitations, or did they only want to study how Caucasian people act?

The results demonstrate that full body ownership illusions can lead to substantial behavioral and possibly cognitive changes depending on the appearance of the virtual body. I find this extremely interesting from a story-telling/gaming perspective. Only in rare cases do I actually "connect" with a game character enough to let the character in a way decide how the game will progress or how I will act, and in those cases the story has been extremely well done. This makes me wonder if you could use a VR body illusion as a substitute for a stronger story? Would a player connect more with the game character if they in VR actually inhabited the body of the character? If we are to believe the results of this study, that would probably be the case!

Which are the benefits and limitations of using quantitative methods?
One of the largest benefits of quantitative methods, to me, is that with large enough data sets we can generalize the results to an entire population. The immediate limitation is that the data set you use for your quantitative research has to be random enough for the data to represent that population. Another benefit is that if you choose to research only a subset of a population, you can hand-pick your data to fit that subset. Quantitative methods are also a lot less time-consuming when it comes to gathering and analyzing the data, compared to qualitative methods.


Which are the benefits and limitations of using qualitative methods?
Using qualitative methods, you can get a deeper understanding of the data, as you can shape the questions you ask and thereby the data you collect. The drawback is that you really cannot get the same amount of data as you could with quantitative methods without putting lots and lots of time into it. On the other hand, this makes it much easier to understand why the data looks the way it does, as you can simply ask questions such as "why?". Naturally, as the amount of data you collect is usually much smaller than in quantitative research, it is hard to draw any general conclusions about a population.

Of course, by combining both qualitative and quantitative methods, you could get the best of both worlds as the benefits and limitations are usually connected to each other across the different methods.

Monday, 19 September 2016

After: Theme 2

This week's theme has been a bit less philosophical as compared to theme 1, which makes it more my jam. I feel like it was easier to actually get a grip of this week's concepts since the questions were not so abstract.

I did not know of any of the concepts we discussed prior to reading the texts and answering the questions, so I have definitely learned something this week. Prior to this week, for example, I did not know that enlightenment was something other than the Age of Enlightenment. I did know that the world started caring more for knowledge and the sciences; however, that a person could reach enlightenment was something I did not know, or perhaps had forgotten since I last studied history in school. It is very interesting how the prevalence of myths could in a way provoke such a big movement to search for true knowledge, "truth", using reason and dialectic, which is a concept that I did not know of either.

I never quite thought of two people discussing with a mutual goal, such as trying to find out the truth about something. Normally, you just see debates on television, which is a completely different kind of discussion where one party tries to "win" over the other with arguments for their own opinion. Relating to all the debates going on in the US because of the presidential campaigns, it would be extremely interesting to see Clinton and Trump engage with each other using dialectic - working together to solve a problem rather than facing off - and that without trying to heckle one another. Who knows, perhaps they could actually reach a solution to a problem and not just verbally bash each other?

Nominalism was, to me, the most difficult concept to grasp during this week's theme. Without rereading my own blog post about it, I actually had a hard time remembering what it meant just now. Up until the seminar, I was not quite sure what it actually meant, but having discussions around the concept at the seminar made everything clearer. Before the seminar, I did not really think about how time and enlightenment related to the concept, but a lengthy discussion around the subject cleared things up. We talked a bit about how the concept of God prior to the enlightenment could have been an image of a man above the clouds, making decisions about what would happen around the world. After enlightenment - since we can neither prove that a God exists nor that a God does not exist - the whole image of a person above the clouds is shattered, as our view has changed. I believe this is one of the reasons why nominalism exists: universals and abstract objects do not remain the same as time passes, which makes it a point to reject them since they are ever-changing.


Friday, 16 September 2016

Before: Theme 3

Select a research journal that you believe is relevant for media technology research. The journal should be of high quality, with an “impact factor” of 1.0 or above. Write a short description of the journal and what kind of research it publishes.

The journal that I have chosen is Web Semantics: Science, Services and Agents on the World Wide Web. The Journal of Web Semantics is an interdisciplinary journal based on research and applications of various subject areas that contribute to the development of a knowledge-intensive and intelligent service Web. These areas include knowledge technologies, ontology, agents, databases and the semantic grid; disciplines like information retrieval, language technology, human-computer interaction and knowledge discovery are obviously of major relevance as well. All aspects of Semantic Web development are covered. The publication of large-scale experiments and their analysis is also encouraged, to clearly illustrate scenarios and methods that introduce semantics into existing Web interfaces, contents and services. The journal emphasizes the publication of papers that combine theories, methods and experiments from different subject areas in order to deliver innovative semantic methods and applications.


Select a research paper that is of high quality and relevant for media technology research. The paper should have been published in a high quality journal, with an “impact factor” of 1.0 or above. Write a short summary of the paper and provide a critical examination of, for example, its aims, theoretical framing, research method, findings, analysis or implications. 

From the Journal of Web Semantics, volume 9, issue 4, December 2011, I have chosen the article "Semantically enhanced Information Retrieval: An ontology-based approach". The article tackles the issues with keyword-based search models, the way we usually search for things on the web. One of the issues with keyword-based searching is that someone who wants to search for a specific topic might not know the terms needed to find what they need. A lot of research has been conducted on "conceptual search", understood as searching by meanings rather than literal strings, which would make it easier to dive into an area you are unfamiliar with. The authors of the article take this a step further and propose an ontology-based information retrieval model. Practically, the difference is that you can use a more natural way of phrasing your queries when retrieving information.
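The contrast between the two search models can be shown with a toy example. This is my own illustration of the general idea, not the paper's model; the mini-ontology and documents below are invented. A keyword search only matches the literal query string, while a concept-based search also matches documents that use related terms the ontology ties to the same concept:

```python
# Tiny invented ontology: concept -> terms the ontology links to it.
ontology = {
    "automobile": {"automobile", "car", "vehicle", "sedan"},
}

documents = [
    "the sedan sped down the highway",
    "my car would not start this morning",
    "the recipe calls for two eggs",
]

def keyword_search(query, docs):
    # Literal string matching only: the exact query must appear in the text.
    return [d for d in docs if query in d]

def concept_search(query, docs):
    # Expand the query to every term the ontology ties to its concept,
    # then match documents containing any of those terms.
    terms = ontology.get(query, {query})
    return [d for d in docs if any(t in d.split() for t in terms)]

print(keyword_search("automobile", documents))  # []
print(concept_search("automobile", documents))
# ['the sedan sped down the highway', 'my car would not start this morning']
```

The keyword search finds nothing because no document contains the literal word "automobile", while the concept search finds both relevant documents - which is exactly the problem with not knowing the right terms that the article describes.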

I find the content of the article very interesting, as using natural language when searching is getting more and more popular with each version of the operating systems that get released. As a real-world example, Apple's virtual assistant Siri uses an ontology-based model (among other methods) to understand what you want to search for when you ask it "what is the weather going to be like tomorrow?". Search engines and virtual companions such as Siri, Google Now and Microsoft's Cortana are getting better each day at understanding natural language, using machine learning and ontology-based models.

The main focus of the article is to bridge the gap between the information retrieval and semantic web communities in the understanding and realization of semantic search. This is great, as it will help people interact with computers and more easily find what they want. The computers have the information; we just need a natural way of retrieving it.

One problem with the evaluation is that, when it comes to the semantic web, there are no standardized evaluation techniques as there are in the information retrieval community, which means that there is no defined way to judge the quality of semantic search methods. The authors have thus conducted their evaluations based on a couple of user-centred methods that are hard to recreate.


Briefly explain to a first year university student what theory is, and what theory is not.

Theory is a way to explain, describe or enhance the understanding of the world using an empirical model. This means that we can, by experimenting and trying out what works and what does not - explain and describe a phenomenon. We can also use these models, the theories, to try and look into the future, by looking at how things have previously worked. An example could be that by throwing a rock, we can see and understand that it will eventually hit the ground again - because of the way that gravity affects the rock. Now, we could also assume that an object of similar weight would also behave the same way! We would be able to foresee, and thus in a way look into the future, that an apple would also hit the ground if we threw it like we did the rock. By doing this, we have created a very basic theory about gravity, and how it affects everything we see around us by pulling it towards the ground.

To understand what a theory is not, we will take help from Sutton and Staw's text What Theory is Not. They bring up five points to describe things that are usually mistaken for theories:
  1. References - These can be used as background to explain something, but references by themselves are not theories.
  2. Data - We can do lots of tests, but only by explaining why the tests turn out the way they do can we create a theory.
  3. Lists of variables or constructs - Much like the "data" point. We need explanations!
  4. Diagrams - We can easily represent a theory with diagrams, but we need the whole story of why a diagram looks the way it does to have a theory.
  5. Hypotheses/predictions - These are basically guesses, not theories. Hypotheses tell us what is expected to occur, not why it is expected to occur.


Describe the major theory or theories that are used in your selected paper. Which theory type (see Table 2 in Gregor) can the theory or theories be characterized as?
The theory in the paper is that an ontology-based model can be used for information retrieval. The theory could be characterized as explanation and prediction, as the results from the evaluation show that their semantic method is more efficient than the traditional methods they evaluated against. I guess it could also be characterized as design and action, as the theory encompasses their semantic method, which they use to conduct queries.



Which are the benefits and limitations of using the selected theory or theories?
It is easier to describe the phenomena in a paper, which makes it easier for further papers to be written about the same topic, and just makes the whole concept easier to understand by realizing the theory. With an explanatory and predictive paper, it is very easy to see what the paper is about, and easy to understand what the outcome could be. It is quite binary - either their proposed method is better, or it is not. The limitations, I guess, would be that the testing may not always be the same. As the authors mention, the evaluation methods in this area are not really standardized, and it may be hard to recreate this test or compare it with a new paper.




Monday, 12 September 2016

After: Theme 1

The first week of the course has now passed, and we have been working with the texts by Plato and Kant, discussing mainly philosophical topics such as "What is knowledge?". I have never really worked with anything philosophical like this before, I believe, which makes this whole concept new to me. As I have no previous experience, it felt a bit weird to work with the given material. The texts themselves were rather hard to read and understand fully, and it took quite a lot of time just to try and read them. Then, thinking about and working with the material was okay I guess, however I prefer more "concrete" texts which I am used to reading when it comes to academic readings.

Nonetheless, I made it past week one and I have thought about things that I have never thought about before - very interesting. I am not sure that I could say that I have learned all that much, but I sure have been thinking in new ways during the week, for the blog post and the seminar. There were some parts of the week's material that spoke to me, and went along with my previous thoughts about the world and I guess perception. The fact that we all see different things while looking at the same object is something I have always thought is very interesting. We all have different backgrounds, come from different places in the world. Keeping that in mind while discussing different topics is something that I believe is very important to be able to understand the other parties.

During the seminar, we were divided into smaller groups to discuss the week's theme. We got the opportunity to go sit outside in the sun in "Borggården" which was a good place to discuss these philosophical questions. We talked quite a lot about hearing and seeing "through" your ears and eyes, not with them. I think the main topic we went on about the longest was about how our brains store impressions and things that we see, hear, smell et cetera. On that topic, I explained my reasoning about the loud noise that I wrote about in the blog post before the theme, how a person most likely would assume that a loud and heavy noise would come from something big like a truck, and not something the size of a small rock - even though the person had not seen the source of the noise. We talked about how these things work and how our minds work to try and "fill in the gaps" if we cannot see the whole picture at the moment.

Friday, 9 September 2016

Before: Theme 2

Dialectic of Enlightenment
  • What is "Enlightenment"?

"Enlightenment, understood in the widest sense as the advance of thought, has always aimed at liberating human beings from fear and installing them as masters".

Enlightenment means gaining new knowledge and distancing oneself from the myths of old. This means that people wanted to gain knowledge about the world, instead of just taking all sorts of preaching, often religious, as fact. By actually understanding the world, people were empowered, making them the enlightened ones - the ones with objective "true knowledge" - no longer controllable by myths.
  • What is "Dialectic"?

Dialectic is a method of argument that is used to search for truth by using reason instead of arguments backed with ethos, pathos and logos as would be used during a debate where each side tries to "win" over the other. Dialectic, in contrast to regular debate, has the participants working together to reach one truth.
  • What is "Nominalism" and why is it an important concept in the text?

Nominalism rejects both "universals" and abstract objects. By universals is meant properties, characteristics and entities that objects have in common, while an abstract object is an object that exists beyond space and time. A universal could be the word "computer". The electrical device in front of me right now is described as a computer; however, not all computers are the same - thus making the descriptive word "computer" meaningless according to nominalism.

The importance of nominalism in the text lies in how dangerous it can be to use abstractions and universals that are outdated, and in how, with enlightenment, new universals have to be formed.

  • What is the meaning and function of "myth" in Adorno and Horkheimer's argument?

Myth is "knowledge" that we cannot objectively prove. This is the sort of knowledge that gave birth to the Enlightenment, as people set out to find true knowledge rather than believing everything that was said - the so-called "myths" previously considered as knowledge.

"Myth is all forms of knowledge that existed before enlightenment"

The Work of Art in the Age of Technical Reproducibility

  • In the beginning of the essay, Benjamin talks about the relation between "superstructure" and "substructure" in the capitalist order of production. What do the concepts "superstructure" and "substructure" mean in this context and what is the point of analyzing cultural production from a Marxist perspective?

The Marxist perspective describes society as a substructure and a superstructure. The substructure is everything related to production, everything that makes the world go forward, although unrelated to culture. Culture is where the superstructure comes in: the superstructure is everything that is not directly connected to production, for example religion and politics - the culture surrounding production. Naturally, the superstructure can in many ways affect the substructure, how things work and how relations within production work.

By analyzing cultural production, we may get a hint of how things may evolve in the future. For example, we know that a change in the substructure usually takes a longer time to propagate to the superstructure than a change within the superstructure itself. The fact that a change in the substructure eventually initiates changes in the superstructure as well is a very important aspect.
  • Does culture have revolutionary potentials (according to Benjamin)? If so, describe these potentials. Does Benjamin's perspective differ from the perspective of Adorno & Horkheimer in this regard?

Yes, Benjamin argues that culture does have revolutionary potential. For example, he brings up a film that may promote revolutionary criticism of traditional concepts of art. This contrasts with Adorno & Horkheimer's perspective, which instead holds that technology has the revolutionary potential.
  • Benjamin discusses how people perceive the world through the senses and argues that this perception can be both naturally and historically determined. What does this mean? Give some examples of historically determined perception (from Benjamin's essay and/or other contexts).

Benjamin means that perception can be both naturally and historically determined: the same object could be perceived in different ways depending on the perceiver's historical background. The same object could, by that definition, mean different things if perceived in the 1800s or the 2000s. Natural perception is more objective - the object is what we see and hear, without regard to our culture or previous impressions.
  • What does Benjamin mean by the term "aura"? Are there different kinds of aura in natural objects compared to art objects?

"If, while resting on a summer afternoon, you follow with your eyes a mountain range on the horizon or a branch which casts its shadow over you, you experience the aura of those mountains, of that branch"

The quote above is an example of the aura of a natural object - a unique phenomenon that lets you feel the presence of an object, in a way. Regarding the aura of art objects, Benjamin describes it as their uniqueness, originality and authenticity.




Friday, 2 September 2016

Before: Theme 1

In the preface to the second edition of "Critique of Pure Reason" (page B xvi) Kant says: "Thus far it has been assumed that all our cognition must conform to objects. On that presupposition, however, all our attempts to establish something about them a priori, by means of concepts through which our cognition would be expanded, have come to nothing. Let us, therefore, try to find out by experiment whether we shall not make better progress in the problems of metaphysics if we assume that objects must conform to our cognition." How are we to understand this?

Kant's proposal is that instead of letting our cognition conform to objects, we could let objects conform to our cognition. At first, this makes for a bit of a confusing train of thought, but if you think about it for a minute or two, it does make sense. By letting our cognition conform to objects, we have to proceed a posteriori, empirically learning about the objects before we can "know" anything about the objects themselves. Now, as we all know, this is not always the case in real life, as you may know the answers to propositions even if you have no prior experience of the propositions themselves. An often used analytical statement to demonstrate a priori knowledge is "All bachelors are unmarried". We do not need to go around to all bachelors asking them if they are unmarried (and thus letting our cognition conform to the objects a posteriori, the objects here being bachelors) to know that they are - simply by analysing the proposition itself (and letting the objects conform to our cognition), we know the answer a priori.

What Kant means here is that the technique of letting objects conform to our cognition is quite handy when it comes to the studies and problems of metaphysics, as we cannot empirically study these objects.


At the end of the discussion of the definition "Knowledge is perception", Socrates argues that we do not see and hear "with" the eyes and the ears, but "through" the eyes and the ears. How are we to understand this? And in what way is it correct to say that Socrates argument is directed towards what we in modern terms call "empiricism"?

I believe what Socrates means by saying that we do not hear and see "with" our eyes and ears, but rather "through" them, is that without our own perception, our eyes may receive photons on their retinas and our eardrums may vibrate because of sound waves, without any further analysis of the information they have captured. We need our perception, our brain, our knowledge, to turn this information into something usable. Without perception, our eyes and ears are merely useless (well, they might look good on the face perhaps, but someone with perception has to know that for it to be useful).

Every now and then you hear something new, perhaps a sound that you have never heard before. You have no definitive way of knowing exactly what it is unless you turn your head and see what made the sound, but your mind most definitely tries to help you guess what the sound was, even before you get confirmation of its source. Perhaps it is a heavy and loud sound that you heard; then your mind would probably try to picture something large, and not something small like a rock. These guesses are made because of your previous experiences, your own empirical studies. You have seen and heard large objects before - trucks, trains, ventilation systems - and you know that they sometimes make these sorts of loud noises, which gives your mind a "heads up" even before you know what made the sound.