Quality attributes for serious games – questionnaire

I’m now wrapping up my PhD research. The last stage of my work focuses on the development of tools to support the design and development of serious games. More specifically, I am collecting a set of requirements and quality attributes for serious games in order to develop (and evaluate) a reference architecture for service-oriented serious games.

Why a reference architecture for serious games, you ask? Because it can help serious game developers jump-start their projects with a blueprint of how the different pieces can be put together.

Are you a game/serious game expert? Do you have 25 minutes to spare? Then click on the link below to fill in the questionnaire:

The questionnaire will be available until 18/02/2016.

Please spread the word!

ICALT2015 wrap up: striving for real impact

The 15th IEEE International Conference on Advanced Learning Technologies (ICALT2015) happened last week, from July 6-9 in Hualien, Taiwan. I had the opportunity to go and present my paper on the service-oriented architecture framework for serious games (and the paper got a best paper award! :) ).

In this post, I try to provide a short wrap-up of the conference and its take-away messages. It is not an easy task, since it was a four-day event with 6 keynotes, one panel discussion, and a total of 182 accepted papers and posters in 18 tracks. Nevertheless, if I had to summarize the conference in a single phrase, it would be Mike Spector’s message in his keynote: we need to make sure that the research we do finds its way to practice and policy, and that it results in real improvements.

The same concern had already been voiced by Miguel Nussbaum in the opening of the conference, when he noted that teachers are not being prepared to integrate technology in the classroom. How do we integrate conventional and digital resources? And how do we use technology to train children for creativity, reflection, metacognition and meaning-making, instead of focusing on rote learning and repetition? Our technological innovations are still too concerned with “how to teach”, instead of focusing on the very important question of “what to teach”. What can be done to change this?

These questions remained open as the conference closed, almost as a challenge for the researchers gathered there. But, as was pointed out during the panel session, changing “what to teach” takes longer than changing “how to teach”. In a research environment that encourages a high output of novelty papers, there is no time for researchers to focus on real impact — which takes time (sometimes years, as Elliot Soloway and Cathleen Norris noted in their keynote), replication studies (which nobody wants to do) and meta-analyses (which can be hard when nobody reports their findings in a way that allows this, or publishes their data openly).

There is clearly a problem here, but nobody really knows what to do about it. Whose responsibility is it to change? Senior researchers have more leverage to push for longer projects than juniors who still need to build a career, but it certainly is not as simple as pressing a button. I heard a similar message at ECTEL2013, and two years later the Technology in Education field is still struggling to pull its researchers away from the shiny new technology and closer to real impact.

Sessions and papers

There were many tracks happening at the same time, so I could only cover so much. I made an effort to have a look at the tracks about games for learning (Track 4), adaptive and personalized learning (Track 2), collaborative learning (Track 5), big data and learning analytics (Track 7) and affective computing (Track 10).

There were some interesting ideas and applications, and I tweeted about the ones I found most interesting (check the hashtag #icalt2015 for my and other people’s highlights). I will not detail any of those here, but maybe I’ll write separate posts about the ones whose papers I had the chance to read, discuss and understand better.

In any case, it was very good for me to move around the different tracks, since I was able to make contact with other people working on applications that could be incorporated into my SOA framework for serious games, or developing games that could benefit from my work. More about these to follow!

Using activity theory in the study of educational serious games

Here I describe a model for analyzing serious games based on activity theory. The model can also be used for serious game design as a tool to evaluate prototypes. This post is a short summary of an academic article published in Computers & Education (link in the text).

An important part of my PhD research has been a theoretical one: I have been trying to understand the internal structure of serious games, particularly how gaming and pedagogical elements are connected to each other in a game to reach its educational goals. This objective supports my overall goal of building a framework to help serious games designers and developers in a very practical way, ultimately making it easier and cheaper to produce serious games.

Recently, my colleagues and I published an article in the journal Computers & Education, titled “An activity theory-based model for serious games analysis and conceptual design”. The article is also available on this website, on the Research & publications page (author’s version).

What is this ATMSG model about?

The main idea of the ATMSG model is to support a systematic and detailed representation of educational serious games, depicting how the combination of serious games elements supports pedagogical goals.

As the name indicates, the model uses activity theory as its underlying theoretical framework. Activity theory is a line of research initiated in the 1920s and 1930s by a group of Russian psychologists. It studies different forms of human practice and development processes, providing a model of humans in their social and organizational context [1].

Activity theory provides us with a way to reason about the relationships between serious games components and the educational goals of the game, considering the game as part of a complex system that involves at least three activities: the gaming activity, the learning activity and the instructional activity.

The figure below illustrates the model.

The ATMSG model
The ATMSG model. Source: doi:10.1016/j.compedu.2015.03.023

The hierarchical structure of the activity also gives us the ability to shift the focus of the analysis to different levels of detail. In ATMSG, each activity is broken down into a sequence of actions, mediated by tools, with specific goals. These items are called “serious games components”: the “concrete” pieces of a serious game that compose the gameplay over time, e.g. characters, tokens, tips, help messages, etc.

A unified taxonomy of serious games components

We used the ATMSG model described above to reorganize existing taxonomies of learning, instruction, games and serious games (such as the Game Ontology Project, Learning Mechanics–Game Mechanics (LM-GM), GOM II, Bloom’s taxonomy, Gagné’s Nine Events of Instruction, and the ARCS Model of Motivational Design) into a unified vocabulary. The taxonomy is organized in a tree structure in which items are classified according to the activity to which they belong and, within that activity, categorized as actions, tools or goals. It aims to aid in the identification and classification of components according to their characteristics and roles in the game.
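To make the structure concrete, here is a small sketch of that activity/category organization as code. The component names below are my own hypothetical examples, not items taken from the published taxonomy:

```python
# Illustrative sketch of the ATMSG taxonomy structure: components are
# classified first by the activity they belong to, then categorized as
# actions, tools, or goals. All component names here are hypothetical.
taxonomy = {
    "gaming": {
        "actions": ["move avatar", "collect token"],
        "tools":   ["character", "points", "timer"],
        "goals":   ["have fun", "win the level"],
    },
    "learning": {
        "actions": ["repeat exercise", "reflect on feedback"],
        "tools":   ["hints", "help messages"],
        "goals":   ["understand a concept"],
    },
    "instruction": {
        "actions": ["present content", "assess performance"],
        "tools":   ["quiz", "progress report"],
        "goals":   ["reach the learning objective"],
    },
}

def classify(component: str) -> list:
    """Return every (activity, category) slot in which a component appears."""
    return [
        (activity, category)
        for activity, categories in taxonomy.items()
        for category, items in categories.items()
        if component in items
    ]
```

For instance, `classify("hints")` locates the component in the learning activity, under tools; a component absent from the tree yields an empty list.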

You can check the full taxonomy in the original article (or in the author’s version of the article).

Serious games analysis with ATMSG

We defined a four-step process to guide users in analyzing existing serious games or prototypes. These steps take the user from a high-level understanding of the activities to the concrete components that implement those activities. The user identifies game components with the help of the taxonomy of serious game components.

  1. In the first step, the user describes the main activities involved in the activity system and identifies their subjects and corresponding motives.
  2. Then, to help in the identification of the components of the serious game, the user produces a diagram that represents the game sequence in a rough timeline.
  3. Subsequently, the user identifies components related to each node of the game sequence. Each event in the game is decomposed into its actions, tools and goals.
  4. Finally, the user groups each set of actions, tools and goals. For each of those blocks, the user analyzes how these components go together in the game and explains how the usage of such components and characteristics support the achievement of the entertainment and/or pedagogical goals of the game.

You can see an example here: an ATMSG analysis of the game DragonBox (PDF, 343 kb).
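The artifacts produced by the four steps above can also be sketched as simple data structures. This is only an illustration of the shape of an analysis, using an invented puzzle game (the names are mine, not taken from the DragonBox analysis):

```python
# A minimal sketch of the artifacts the four ATMSG steps produce,
# using a hypothetical puzzle game as an example.
from dataclasses import dataclass, field

@dataclass
class Node:
    """One event in the game sequence (step 2), decomposed in step 3."""
    name: str
    actions: list = field(default_factory=list)
    tools: list = field(default_factory=list)
    goals: list = field(default_factory=list)

# Step 1: the main activities, their subjects, and motives.
activities = {
    "gaming":      {"subject": "player",  "motive": "have fun"},
    "learning":    {"subject": "player",  "motive": "learn arithmetic"},
    "instruction": {"subject": "teacher", "motive": "teach arithmetic"},
}

# Step 2: the game sequence as a rough timeline.
sequence = [
    Node("tutorial", actions=["follow instructions"],
         tools=["help messages"], goals=["learn the rules"]),
    Node("level 1", actions=["solve puzzle"],
         tools=["tokens", "timer"], goals=["practice addition"]),
]

# Step 4: group components and ask which parts of the game
# support a given entertainment or pedagogical goal.
def components_supporting(goal: str) -> list:
    return [node.name for node in sequence if goal in node.goals]
```

Asking `components_supporting("practice addition")` then points the analyst to the timeline nodes whose components serve that goal.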

Will ATMSG be useful in my project?

The ATMSG model can be a little complex, and it has a steep learning curve for users with little or no experience with games [2]. For this reason, it is recommended as a tool for the analysis of existing serious games when a thorough understanding of the characteristics of the game is needed, for example when adapting games for use in specific learning settings, or when detailing the analysis to identify and catalog learning patterns.

It can also be a helpful tool when evaluating game prototypes during the design process, providing a way for the design team to understand how well their game ideas can support the desired learning objectives before any code is written.

Interested in this topic? Don’t hesitate to contact me!


M. B. Carvalho, F. Bellotti, R. Berta, A. De Gloria, C. Islas Sedano, J. Baalsrud Hauge, J. Hu, and M. Rauterberg, “An activity theory-based model for serious games analysis and conceptual design”, Computers & Education, Volume 87, September 2015, Pages 166-181, ISSN 0360-1315, doi:10.1016/j.compedu.2015.03.023.

You can find the author’s version of this paper in my Research & publications page.


[1] The best book I have read with a good and understandable explanation of the theory is “Acting with Technology”, by Victor Kaptelinin and Bonnie A. Nardi, particularly Chapter 3.

[2] We evaluated this in a user study comparing ATMSG to LM-GM. The dataset and files for replication studies are openly released in the public domain.

Building a SOA framework for serious games – ICALT 2015 presentation

In this post, I describe my work on defining a service-oriented architecture (SOA) framework for serious games. It is a summary of the paper presented at the 15th IEEE International Conference on Advanced Learning Technologies (ICALT2015) in Hualien, Taiwan. This work received a best full paper award at the event.

If you are a serious game designer/developer and you are interested in this framework, please check our tutorial at the International Conference on Entertainment Computing (ICEC) in Trondheim, Norway, on September 29th, 2015.

My PhD research is dedicated to finding ways to make the design and development of serious games cheaper and more accessible, while maintaining quality.

One solution we have been pursuing is to encourage the use of service-oriented architectures (SOA) in serious games. SOA is widely used in software engineering, and it can also benefit serious games development by reducing costs and time to market, while allowing customization in a relatively easy and reconfigurable way.

A first step towards SOA for serious games was the SG Services Catalog, which is maintained by the Serious Games Society. There, developers can find a curated list of existing services that can be used to add functionalities to their games.

But it is still necessary to identify which serious games elements are relevant and usable as services, and how to interconnect these elements in the game. In other words, if a group of serious game developers wants to take advantage of such services, where should they start? If developers want to build services that can be used by different games, which services should they develop? Which interfaces should these services expose to allow for a high degree of reusability?

In the paper, we used the ATMSG model as a starting point to identify candidate serious games components that could be developed as services. From ATMSG’s taxonomy of serious games elements, we collected a number of relevant items. The criteria for the selection were (a) relevance for the effectiveness of educational serious games, and (b) possibility of reuse across different games and learning domains, at least within the same game genre.

Once the relevant educational serious games components described in the ATMSG taxonomy were identified and collected, we regrouped them according to their domains and functionalities, so that we could identify the clusters of reusable components to be implemented as services in a SOA framework for serious games.
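To give a feel for what “components implemented as services” means in practice, here is a hypothetical sketch of one such candidate service. The service name, its methods, and the in-memory implementation are my own illustration, not the paper’s specification; the point is only that the game depends on a narrow interface, so the same service can back many games:

```python
# Hypothetical sketch of a reusable serious game service interface.
# Names and methods are illustrative, not from the paper.
from abc import ABC, abstractmethod

class LearnerProfileService(ABC):
    """Example candidate service: tracking learner progress across games."""

    @abstractmethod
    def record_event(self, learner_id: str, skill: str, score: float) -> None:
        """Store one assessment event for a learner and skill."""

    @abstractmethod
    def mastery(self, learner_id: str, skill: str) -> float:
        """Return an estimate of the learner's mastery of the skill."""

class InMemoryLearnerProfile(LearnerProfileService):
    """Trivial local implementation; a real one would sit behind a web API."""

    def __init__(self):
        self._scores = {}

    def record_event(self, learner_id, skill, score):
        self._scores.setdefault((learner_id, skill), []).append(score)

    def mastery(self, learner_id, skill):
        scores = self._scores.get((learner_id, skill), [])
        return sum(scores) / len(scores) if scores else 0.0
```

A game built against `LearnerProfileService` could swap the in-memory version for a remote one without touching its gameplay code, which is the kind of reusability the framework aims for.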

The functional domains

We identified eight functional domains, and for each domain we identified a list of candidate functionalities that could be implemented as services. These are shown in the figure below.

Functional domains and candidate services
Functional domains and candidate services

A complete description of these domains and services can be found in the paper (the author’s version is available in this site in the Research & publications page).

The presentation


Evaluation of serious games models

As I posted earlier, part of my PhD research is dedicated to the theory behind how gaming and pedagogical elements are connected to each other in a serious game to reach the game’s educational goals.

I am now running a questionnaire in which I try to compare two models for serious games analysis, to collect experts’ perceptions of how usable and how effective each of these models is for evaluating existing serious games.

The description of the models and the write-up of this evaluation will (hopefully) be published in a journal article soon :)

Can you help me in the evaluation?

Then visit the link to read more about the questionnaire.

A small compensation will be offered to the first 13 participants to complete the questionnaire before March 18th, 2015: a 20€ gift voucher for either Google Play or Amazon.

“What’s your PhD about?”, one year later

One year and a few months ago, I wrote a post trying to explain the topic of my PhD. I would like to give a small follow-up on how that has evolved.

To recap, I wrote that my topic was developing a tool to automatically capture and represent, over time, the emotional states of players in a collaborative serious game. The idea was that the tool would be generic, so that it could be used independently of the topic of the game or the type of gameplay. It would be developed as a service, to facilitate its integration not only with the game, but also with other services (e.g. user profiling, social features…).

It is still in my plans to develop this service, but so far I have been focusing on more general aspects of the development of serious games. This will, hopefully, pave the way for the creation of an emotion recognition service that can be truly useful and easily integrated with several types of serious games. Instead of considering the service my end product, it will be more of an example implementation, illustrating a vision of what serious games could be like.

So far I’ve been working on this vision on two main fronts, one more theoretical and the other more practical:

  • The theoretical one is a model to understand the internal structure of serious games, particularly how gaming and pedagogical elements are connected to each other in a game to reach its educational goals.
  • The practical one is an investigation of the benefits and drawbacks of using Service-Oriented Architecture in the development of serious games. Why is it a good thing, and what needs to be taken into consideration when creating a game from pieces developed by different teams, in different places, with different goals?

My work from now on is to match those two facets of the problem and see how I can use the model to create a framework that helps serious games designers and developers in a very practical way. My goal is that this framework will help them connect the pieces (that is, the existing services for serious games), ultimately making it easier and cheaper to produce serious games.

Ubuntu 12.04, Banco do Brasil and the A3 Certificate

(This post was originally written in Portuguese, as it is mainly relevant to readers in Brazil.)

Today I finally got Ubuntu and my Certisign eCPF to talk to Banco do Brasil. Until now, I had been dual-booting into Windows just to pay my credit card bill…

Here is the walkthrough, for anyone who has been trying to do the same thing without success. The instructions only work for Firefox.

1) Make sure your card reader works on Ubuntu. Mine is an OMNIKEY, which worked without my having to install anything. The “dmesg” command will tell you whether the reader is working or not.

2) My Certisign card needs the SafeSign software to work correctly. The good news is that a Linux version is available, with instructions and everything. In my case, I didn’t have to worry about any dependencies; I installed the 64-bit SafeSign directly. I followed all the steps explained on the page, including installing the module in Firefox.

3) It should work now, but it doesn’t. Digging around a bit, I found out that the BB applet looks for a list of card reader modules in a specific order. If the computer doesn’t have one of the files named exactly the way BB expects to find it, Banco do Brasil will throw a “Reader/Token not found” error or something similar. SafeSign installs the module “”, while BB wants the file “”. The solution is simple: just create a symbolic link with the name BB wants, pointing to the file that actually exists. Like this:

$ sudo ln -s /usr/lib/ /usr/lib/

Of course, this will only work if that is actually the location of the file installed by SafeSign. It may be different in your case.

Done! Refresh the BB page and try again; the bank’s applet should now find the reader module and show the certificate password prompt.

Why I abandoned Facebook

One of these days, a friend and I received a random message from some random person on Facebook, asking us to check out his music and give him our opinion. That message bothered me to no end, as my inbox on Facebook is a space I’m used to having reserved for real friends, not for spam. Facebook does have its “Spam” folder (the Other box), and most of the junk goes there directly, and I never look at it. But then I remembered reading, a few months ago, that Facebook was testing a service in which a person could pay a fee to have their message delivered directly to the Inbox folder. And sure enough, the help area on Facebook says: “Additionally, someone you’re not connected to on Facebook may pay to ensure their message is routed to your inbox instead of your Other folder.” [here, where it explains how filters work]

So, how fun is this: I get a messaging system with a spam filter that I’m not able to control, and which anyone can bypass by paying; not by paying me, mind you, but by paying Facebook.

If you’re not paying for it, you’re not the customer. You’re the product being sold.

Then, the very next morning, I read another piece of news saying that they are now testing autoplay video ads in the newsfeed (also if the advertiser pays extra, of course).

So that was it for me. The decision was made: I had to get out of Facebook.

It’s my data!

After I decided that I no longer wanted to actively maintain my profile, the first step was to back up my data. Facebook does offer an option to archive your files, which includes your messages and posts, but no emails from contacts (just the name list) and no comments. It required some workarounds to get all the comments that were important to me (using the Evernote clipping tool and saving them as HTML on my hard drive), but eventually I managed to get all my 7 years of personal communications out of their iron grip.

Saving the email addresses from the contact list requires more patience and a Yahoo account. They refuse to give you the list of emails the easy way, but it can be done.

Finally, the most painful step of all is cleaning up the posts. I wanted to clean up everything; I am keeping the account, albeit abandoned, but I would rather not leave all my information there for them to use. It is painful to run Greasemonkey scripts that only kind of work, refreshing the page whenever they get stuck. Think of the sheer amount of junk, likes, posts, photos and whatnot that I accumulated over those 7 years… It took me some 5 days to delete everything.


Bottom line: I want my communications to belong to me. I want it to be easy to control who can send me messages. I want to be able to export the messages that are important to me and delete those that are not. I want to be able to take my contact list to whichever service I desire. I don’t want to see ads that autoplay while I am trying to check my nephews’ photos.

For this reason, I abandoned Facebook. I am now moving to open alternatives, testing out Friendica and Diaspora*. If you are curious about them, find me there :)

Call for Papers: Second International Workshop on Collaboration and Gaming (CoGames 2014)

As part of The 2014 International Conference on Collaboration Technologies and Systems (CTS 2014), In Cooperation with ACM, IEEE, and IFIP (Pending)

May 19-23, 2014
The Commons Hotel, Minneapolis, Minnesota, USA
Submission Deadline: December 30, 2013
Submissions could be for full papers, short papers, poster papers, or posters

For more information:

Scope and Objectives

Development of video games is by definition a multidisciplinary process involving several professions, ranging from artists to engineers. The AAA game titles produced today require rather large teams with a high level of competence (technology, programming, networks, architecture, etc.), creativity and skill. Compared to traditional software development, game development is characterized by rapid changes of hardware, high performance requirements, and software requirements that are unstable and hard to predict. Video games are also used for purposes other than pure entertainment, e.g., for education, training, exercising, and simulation.

In current trends, game developers are focusing more and more on games where players must collaborate to achieve goals in the game. Collaborative games introduce challenges for game developers in handling technical issues, performance issues, network issues, distributed environments, sharing of information, and heterogeneous networks and devices. Further, collaborative games open new areas and applications for games to be used for new purposes that can benefit from more than just fun. The creation of these “serious collaborative games” adds further challenges, including matching content to game activity and modeling player proficiencies. As players expect games to be playable anywhere, the integration of mobile, hand-held and online gaming on smartphones, consoles and tablets is becoming ever more important. This integration introduces new challenges and leads to new opportunities.

See more at:

Liberating annotations in Personal Documents from the Kindle

UPDATE: It seems that now the notes from personal documents also show up in the Your Highlights page. So there is no more need for all the tricks below. I’ll keep this post here just for historical reasons anyway.

UPDATE 2 (01/04/2014): I’ve been running more tests, and it looks like only my Paperwhite started uploading the notes from Personal Documents after the latest firmware update. However, that didn’t happen for every personal document, just for one book specifically. I’m still investigating what made that book special in Amazon’s view… But for now, it seems that the post below is, after all, still relevant.

Now that I am using my Kindle to read academically, I’ve realized that I will need to make very good use of the Notes and Highlights feature. I am still working out a system to make my notes more useful in my reading (which will become a post of its own), but before I put such a huge effort into note-taking, I want to make sure that the notes can be easily retrieved and safely stored outside of the Kindle.

For books that have been bought from Amazon, getting the notes out of the Kindle is not a problem at all. Amazon has a page that shows all your notes from all your Kindle books, taken in any of the devices that you have connected to your account (you have to be logged in to Amazon to be able to see it). You can just copy and paste from that page and that’s it. Or even better, you can clip it to Evernote.

The problem is when you are annotating documents that have not been bought from the Amazon store – what Amazon calls “Personal Documents”. For those documents, notes and highlights can be synchronized across devices, which is very handy, but unfortunately they are not uploaded to the magic page with all your highlights.

“My clippings.txt”, almost a solution, but not quite

The “My clippings.txt” file is great and there are amazing tools that extract the notes from this text file and allow you to export them to many formats (this one is my favorite).

For those who don’t know this, each time you take notes in your Kindle device (not in the desktop or mobile apps), the device writes the note to that text file. It is a one-way operation, though; more like a back-up. If you delete a note from the book, it will not be deleted from the “My clippings.txt” file. If you edit the file and put it back in the Kindle, it will not carry your edits back to the book annotations.
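Part of why extraction tools like the one above are feasible is that the file is plain text with a predictable shape. A minimal parsing sketch follows; it assumes the common layout (title line, metadata line, blank line, clipping text, then a “==========” separator), which can vary with device language and firmware, so treat it as illustrative rather than a robust tool:

```python
# Minimal sketch of a "My Clippings.txt" parser. Assumes the common
# English-locale entry layout; real files vary by device and firmware.
def parse_clippings(text: str) -> list:
    clippings = []
    for entry in text.split("=========="):
        # Strip stray BOMs and carriage returns Kindle files often contain.
        lines = [line.strip("\ufeff\r ") for line in entry.strip().splitlines()]
        if len(lines) < 3:
            continue  # skip empty or malformed entries
        clippings.append({
            "title": lines[0],          # e.g. "Some Book (Author Name)"
            "metadata": lines[1],       # e.g. "- Your Highlight on Location ..."
            "content": "\n".join(lines[2:]).strip(),
        })
    return clippings
```

Feeding it the raw file contents yields one dictionary per note or highlight, which can then be exported to Evernote, CSV, or anything else.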

But there are two problems, at least for me, with relying on this text file for my note-liberation needs. One is that the file is device-dependent, so notes that I take while reading a Personal Document on another device will not show up in my Paperwhite’s “My clippings.txt”. The second is that the Kindle reading apps (for desktop and mobile) do not produce such a file (otherwise it would just be a matter of merging the different files and exporting them).

In practice, this means that any notes I take while reading a Personal Document on my Android tablet will appear on my Paperwhite while I’m reading the book, but they cannot be exported in text format outside of the devices.

.mbp and .embp, the annotation files

As I said above, the “My clippings.txt” file is merely a backup of your notes. The notes themselves are stored in a “side” file, named exactly like the e-book file itself but with a different extension: .mbp, a file that holds Mobipocket annotations (Mobipocket was bought by Amazon and has now basically disappeared).

These files are not text files, and although I haven’t found any software that can edit them directly (apart from Kindle for PC itself, which opens them without trouble together with the original book, but doesn’t allow bulk export of all the notes), some nice soul posted a tool online that can extract the .mbp notes to text.

(One tip for those using the MBP note extractor above: edit the file mbp.parameters.txt and remove the “#” from the beginning of the line that reads BKMK_PAGE_FACTOR=150, as this will write the location of the notes to the output file.)

But there is one problem here. Up until late 2012, these files were just as Mobipocket had created them, and using tools such as the one posted above, or software like Buzzworthy, you could simply extract the notes and live happily ever after. But since then, Amazon has started encrypting the notes (file extension .mbp1, .embp or .mbs), and those tools no longer work. Not fun.

The solution to this issue is posted on Buzzworthy’s website: you have to downgrade the Kindle app on your mobile/tablet in order to keep your notes in the non-encrypted format, so that the .mbp file can still be read.

Wrap up

So, to sum up all that is written above, this is how I am managing my notes in my Personal Documents in all my Kindle apps/devices (one Kindle Paperwhite, one Android Smartphone, one Android tablet, and two Kindle for PC installations):

1) I make sure my Personal Document is synchronized across my devices by using “Send to Kindle”. Although it is possible to add personal documents directly to my device(s), I use Send to Kindle, so that the document and the notes are synchronized across all the devices (Cloud and Desktop readers, unfortunately, do not sync personal documents). If you are sending the document via e-mail, make sure to add the word “CONVERT” to the subject of the message.

2) To extract the notes I take directly in the Paperwhite, I export the “My Clippings.txt” file to Clippings Converter and from there to Evernote. And because this is the simplest method, I am trying my best to do all my annotations there. But if I need to make notes on the phone or the tablet, it’s not a big problem. Read on.

3) I keep the Kindle app on my smartphone at an old version of the software. This way, I can get the unencrypted .mbp file off the phone if I need to, and since I sent the document via Send to Kindle, this file will have all my notes, even those from the other devices with newer Kindle apps. I just have to make sure that the notes have been synced before transferring the file from the phone to the computer (via USB, Dropbox, Bluetooth, whatever works). The .mbp files will be inside the “kindle” folder, together with the books themselves.

4) If I need to extract those notes from the .mbp file, I use the handy software mobipocket notes file extractor to get them safely out of the Kindle ecosystem. Setting the “BKMK_PAGE_FACTOR=150” preference makes the software print out the locations of the notes as well.

5) If I need to see the notes on my desktop, I copy the personal document and the unencrypted .mbp file manually from the phone to Kindle for PC, by simply dropping those files into the “My Kindle Content” folder on the computer. Then I can open the book, and all my markings are there on the desktop (which is very useful while cleaning up my notes in Evernote Desktop, using a proper keyboard, with the book and notes open right in front of me).

I tried adding notes on the desktop and sending the .mbp file back to the phone, to see if the new notes would sync and show up on the other devices, but unfortunately this didn’t seem to work. I could see the note on the phone, but it was not replicated to the other devices. It only became visible on the Paperwhite once I edited it on the phone directly. So I treat the Kindle for PC installation as read-only – I never edit notes there, so that they won’t get lost.

So that’s it. It’s a lot of pain to go through just to get your Personal Documents’ notes out of the Kindle. It bothers me to think that this is Amazon’s way of pushing us towards buying our e-books from them, so that we can enjoy the much better synchronization that Amazon books get. I have always preferred to buy my books from Kobo and convert them to .mobi using Calibre before sending them to the Kindle, but I don’t know if I want to go through all this trouble every time… I can’t wait for Open Annotations in epubs to become a reality and have some impact on Amazon’s way of dealing with this. After all, those are my notes, and I want to make sure they are easily readable!

The next post will be more about the quality of the notes and how I plan to use them to help me read critically on an e-book reader.