For an Effective Mobile Content Strategy, First Understand Your Users

Any CMS worth its salt should be able to support mobile devices through platform-specific targeting of content and style elements. However, simply making your page layouts and stylesheets mobile friendly may not be enough to satisfy your users.

Different Ways of Providing Mobile Content

There are different ways of supporting your users on the move, including*:

  • RSS News feeds
  • Mobile friendly web pages – navigation as per your current site structure
  • Downloadable eBook/pdf – for Kindle/Tablet users
  • Mobile friendly site – both pages and content structure optimised for mobile
  • Mobile friendly site in an app – installed like a mobile app, works like a website (normally HTML5)
  • Framework based mobile app – e.g. PhoneGap – makes native phone/tablet functionality (e.g. GeoLocation, local storage) available to mobile web app (normally HTML5)
  • Native mobile app – implemented in native language for each device – e.g. iOS, Android

* (you can find out more from my previous post Mobile Apps for the Uninitiated)

Broadly speaking, the options get more expensive as you go down this list, but offer a potentially much richer and deeper ongoing engagement with your users.

None of these approaches covers all eventualities – there is a cost/benefit for each. For example, RSS feeds provide users with easy access to news items from your web presence, typically with very little extra setup cost. At the other end, native apps provide the smoothest experience, and the possibility of an excellent push content channel. However, you can’t push content to users unless they download the app, and they will only download an app if it supports an activity they want or need to do.

Different Users, Different Uses

Users may fill their time with research-type activities when commuting to and from work on the train, using their smartphone or tablet. They may wish to access material relevant to their job at their desktop, look up your contact details on the move, or check their user account and outstanding orders at lunchtime…and so on. If you hope to have a clear idea of how to service their requirements, you need to clearly model the key user journeys you want to support; otherwise you are not making their lives easier. Different kinds of users engage with different kinds of content, on different platforms, for different reasons, in different situations.

There is no one-size-fits-all approach to reusing content on mobile platforms, beyond the basic exercise of providing content. Whilst this basic exercise is better than nothing, it is unlikely to make all, or even any, of your groups of users engage more deeply with your content.

The Right Approach for Your Users

It may be that you have something to offer your users that means they are keen to engage on an ongoing basis – for example, if they order your goods regularly, or if they use real-time information, or if there is a professional or interest based reason for frequent two way communication. In such cases, you will most likely have a strong case for developing a mobile app.

You may find that your users just want your news on an occasional basis – in which case a mobile-friendly news page or an RSS feed may well suffice. If your users tend to check you out on the move, then your entire site navigation, along with the page content, will need reconsidering in light of issues such as:

  • how do people access your content now, and how should they?
  • how should you signpost the most important activities in the limited screen space of a mobile device?
  • how can you keep each sequence of activities short and easy to manage on a mobile keypad?
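The platform-specific targeting mentioned above often starts with something as simple as inspecting the User-Agent header. The following is a minimal, hypothetical sketch – the keyword lists and variant names are invented for illustration, not taken from any particular CMS:

```python
# Hypothetical sketch: choosing a content variant from the User-Agent header.
# Keyword lists are illustrative and far from exhaustive; real-world device
# detection usually relies on a maintained library or database.

MOBILE_KEYWORDS = ("iphone", "ipod", "android", "blackberry", "windows phone")
TABLET_KEYWORDS = ("ipad", "tablet")

def choose_variant(user_agent: str) -> str:
    """Return 'tablet', 'mobile' or 'desktop' for a raw User-Agent string."""
    ua = user_agent.lower()
    if any(keyword in ua for keyword in TABLET_KEYWORDS):
        return "tablet"
    if any(keyword in ua for keyword in MOBILE_KEYWORDS):
        return "mobile"
    return "desktop"
```

The returned variant name would then drive which stylesheet, navigation structure and content précis the CMS serves.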

Reuse of Content

Only when you understand the likely patterns of engagement of your users will you be in a position to judge how you may be able to reuse your content. Although the challenge of how you will push that content out technically is not to be underestimated, that is just a side issue compared to the organisational and human complexity of establishing an appropriate authoring process.

Reuse May Require Rewriting

You cannot expect content designed for the written page to be a good fit for mobile devices and vice-versa. You may be able to give much more concise, interactive and context-sensitive content on a mobile device, which can be made aware of its environment to some degree, as compared with a desktop browser. If you are considering reuse, then you need to set up an appropriate workflow that will segment your content into elements that are appropriate for each platform. In your CMS, this may mean that you have separate précis, body and imagery for each distinct platform. You will no doubt wish to flag which content may be permitted for use, or blocked from use for each platform as well. You may want the structure as well as the content to be pushed into the mobile device.
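As a sketch of what such segmentation might look like in a content model, consider the following. All names here – the fields, the platform keys, the fallback behaviour – are assumptions made for illustration, not a description of any actual CMS:

```python
from dataclasses import dataclass, field

# Illustrative content model: each item carries per-platform variants
# (précis, body, imagery) plus a set of platforms where it is blocked.
# Field and platform names are hypothetical.

@dataclass
class ContentVariant:
    precis: str                                   # short summary for listings
    body: str                                     # full text, rewritten per platform
    imagery: list = field(default_factory=list)   # image references

@dataclass
class ContentItem:
    title: str
    variants: dict = field(default_factory=dict)  # platform name -> ContentVariant
    blocked: set = field(default_factory=set)     # platforms where item is withheld

    def for_platform(self, platform: str):
        """Return the variant for a platform, falling back to 'web';
        return None if the item is blocked on that platform."""
        if platform in self.blocked:
            return None
        return self.variants.get(platform, self.variants.get("web"))
```

The key design point is the explicit fallback: a platform without its own rewritten variant gets the web copy, while a blocked platform gets nothing at all – mirroring the permit/block flags discussed above.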

Mobile Apps as a Content Delivery Platform

If you are in the fortunate position of having a compelling reason for deep two-way engagement with your users – perhaps as a membership or professional body, or as a charity – then it may make sense to consider developing a mobile app as a content delivery platform. The advantage of this is that you can offer a bespoke engagement with content which can, if implemented correctly, be updated regularly without distributing a new app. Users can then engage with content on the move and access it subsequently without an internet connection. In effect, you can have a targeted push channel into your user base, as well as an effective platform for two-way communication.

Creating an effective mobile content strategy is complex, though it offers great opportunities. Only by understanding the needs and behaviour of your users can you hope to succeed in achieving your organisational aims.


Some key considerations for achieving a successful intranet

Perhaps I should do myself a big favour and keep a set of generic documents to stuff into proposals – but I don’t! However much I try to copy and paste what has gone before, I still end up refreshing my thoughts for each new proposal. The following are some thoughts I put together for a recent intranet proposal – top level project issues that I think merit attention at the beginning of each new intranet I get involved in.

Informing and engaging users
There are key elements of an intranet that must be realised in order to achieve core business objectives, such as document repositories, content searching, user permissions management, news and so on. At the same time, users should want to come back; the intranet should become the “go-to” application of choice for many of their key daily tasks. In other words, there must be a balance between informing users and engaging users.

Intranet is a process, not a system
Clearly an intranet is a system, but it is also more than that – a vehicle for changing the way that people work, for making their lives easier and the company more effective and efficient. As users start to engage with the intranet, they will find new ways of engaging with the knowledge provided by the organisation. The ultimate goal of a good intranet is to become an effective knowledge-sharing platform – not just between the company and the users, but also between users.

You can’t know what will work in advance
Some ideas that seem excellent on the face of it don’t work in practice due to unforeseen or unforeseeable circumstances. Often ideas that we seek to translate from other settings – such as social media – fail in a corporate environment. For instance, my agency has implemented attractive, engaging features such as noticeboards for car sharing, which have nonetheless failed to take off. On the other hand, small elements of functionality such as ‘Who’s locking up’ and ‘Who’s out of the office’ add an unexpected value, and are picked up with enthusiasm by users.

Early involvement = happy users
Because an intranet is a system that impacts upon users’ everyday lives, they need to have confidence that it will work for them. If the system feels imposed from above, it is likely to meet resistance in rollout. If, on the other hand, users are involved early and often, it is easier to identify what works and what doesn’t work.

Ideally the development process should include a product owner from the client, as well as representatives from key user groups. By so doing, it becomes possible to identify what works and what doesn’t work early on, as well as creating a group of enthusiastic advocates who will help smooth the way for eventual buy in by the user base as a whole.

80% of value from 20% of functionality
With intranets, as with many other kinds of technology, 80% of the value is realised using 20% of the functionality. By releasing key functionality early in the development cycle, it becomes possible to:

  • achieve early wins
  • demonstrate successful progress
  • build user confidence in the solution
  • ensure the most critical functionality is the best tested

In Conclusion
If pushed, I will always prefer a methodology that involves frequent releases and lots of user input. For reasons of internal politics, as well as the paperwork required for formal budget applications, it is often not possible to adopt an agile approach wholeheartedly and in full. However, I think it is a matter of focus here – are you more bothered about ticking boxes, or about maximising the business objectives achieved with your budget? An agile mindset is all about getting useful work done as soon as possible, about satisfying pressing business needs, building confidence, and flushing out issues so they don’t stack up at the end of the project. Agile does not mean chaos – despite prejudices to the contrary. Agile to me means acting in light of the facts, and illuminating the facts through action – once the research has been done and the groundwork has been laid.

Turn Twitter Searches into Research Archives using Google Reader

Ok, so you are plugged in to endless Twitter feeds to get the low-down on what’s going on – but how do you turn this into a useful research resource?

You can keep track of what is being tweeted and collect it in an archive by using a combination of Twitter’s advanced search and Google Reader. Twitter’s advanced search allows you to filter out what you don’t want and keep only what’s of interest. Google Reader means the captured search feed is then archived for later review – hence building a knowledge base.

How to do it:

  1. If you don’t already have a Google account, set one up
  2. In Firefox (preferably) go to Twitter Advanced Search – – set all your search criteria (don’t forget to choose language) and change the number of results to 50
  3. Iterate to perfect – search, look at the results, use the back arrow to return to the form with the details pre-filled, then search again
  4. Once you’ve got a results list you like the look of, click on the RSS (Feed for the Query) link on the RHS – if you haven’t already chosen your default, you will be asked what to use to subscribe to the feed – choose Google
  5. The Google Reader interface will be loaded and you will be asked for confirmation for subscribing

Now you will have an archived, updated RSS feed with your Twitter search in it, that you can review whenever you want – you can even then use it as the raw material for a Yahoo Pipe, or other feed consumer.
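Once archived, the feed is just RSS, so it can be processed with any feed consumer – or with a few lines of code. Here is a minimal sketch using Python’s standard library; the sample feed content is invented for illustration:

```python
import xml.etree.ElementTree as ET

# Sketch of processing an archived search feed. The sample RSS 2.0 feed
# below is invented; in practice you would fetch the archived feed itself.

SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Twitter search: knowledge management</title>
    <item><title>Tweet one about KM</title><link>http://example.com/1</link></item>
    <item><title>Tweet two about KM</title><link>http://example.com/2</link></item>
  </channel>
</rss>"""

def feed_items(feed_xml: str):
    """Return (title, link) pairs for every item in an RSS 2.0 feed string."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]
```

For example, `feed_items(SAMPLE_FEED)` yields a list of title/link pairs ready to be filtered, tagged or fed onward into another pipeline.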

Tacit Knowledge – the Real Challenge for Knowledge Management

The concept of tacit knowledge is as slippery as it is critical to the success of knowledge management. Omnipresent yet unacknowledged, tacit knowledge is most easily grasped by opposing it to the more familiar kind of knowledge we codify in documents, diagrams and procedures – explicit knowledge. By contrast, tacit knowledge is hidden, yet at the same time present in everything we do and everything we talk about. What I am talking about is what is often called “savvy” – savoir-faire (know-how) and savoir-vivre (life knowledge).

Tacit v. Explicit Knowledge

According to the philosopher of science and social relations Michael Polanyi, tacit knowledge is always employed to “attend to” the realisation of something more explicit; it can never, by its very nature, be the focus of our attention. Think of it as being like a pair of glasses – we can see through them only for as long as we don’t look at them. As soon as we scrutinise our tacit knowledge and make it explicit, it is transformed; it ceases to be of use. We might implicitly know how to ride a bicycle, but as soon as we try to explain it in detail, as soon as we consider it as an explicit object of interest, we start to fall off.

Given that knowledge management aims to promote the sharing of knowledge in order to facilitate organisational learning, it cannot afford to ignore tacit knowledge. However, trying to make the tacit explicit is like trying to focus on your peripheral vision whilst keeping it peripheral. Tacit knowledge is non-verbal – as soon as one starts to classify or systematise it, it becomes quite another kind of knowledge; it transforms from “knowing how” to do something into “knowing that” something is done.

Recognition of Context

The late, great theorist of mind and learning Gregory Bateson cautioned against confusing different logical types of knowledge and learning. We need to recognise the limitations of analytic language and taxonomic schemas, which simply cannot address or invoke the kind of understanding of the world that is wrapped up in our physical habits, in our style and manner of getting things done. Above all else, our capacity to behave appropriately relies on our ability to recognise the different contexts in which we act – the same behaviour might be expected in one case and entirely inappropriate in another: in a job interview, chatting with a friend, playing, fighting, on a date and so on. As the easy transition between playing and fighting shows, discerning context is no easy matter when a tacit understanding of context is not shared by those involved – hence the occasional feeling of tension and indecipherable “bad vibes”.

Why not Just Stick to the Sharing of Explicit Knowledge?

So, if it is so difficult to deal with, why not just accept the limitation and forget about it? For Bateson, as for the philosopher Ludwig Wittgenstein and current experts in the field of cognitive linguistics, such as George Lakoff, language acquires meaning only in the context of physical practice. The way that language manages to mean so much with such a parsimonious use of words relies on its ability to invoke the much bigger horizon of knowledge wrapped up in our bodies and their habits.

You can categorise language and schematise process as much as you like, but such strategies can only be effective when the various parties using explicit knowledge – information – already have, or manage to achieve, an implicit agreement about the behaviours that go with the language. For example, when the dry language of scientific proceedings includes mention of using a mass spectrometer or a titration, the meaning thereby shared with a scientific reader relies on author and reader having a shared bodily experience of the procedure, or something like it. If a non-scientist reads such a report, the meaning shared is of a more limited kind, the reader remaining firmly on the outside.

The Role of Narrative and Creativity

How then do we pass on that which cannot be directly named or indicated? In a word, creativity! Here Bateson suggests we recognise the vital importance of art and narrative: parables, metaphors, similes, allegories, dialogues and reenactments that may be used to bring one another to the moment of learning and transformation, that may be used to hint beyond the words.

Stories are, of course, one of the oldest ways of passing on understanding, or perhaps one should say wisdom. At the same time they are noticeably absent from our age of factoids and real time “knowledge transfer”. So, the key challenge here for knowledge management is how to occasion such passing of wisdom, how to promote the diffusion of tacit knowledge whilst continuing to capture, categorise and disseminate the kind of explicit knowledge that is built upon it.

The Dangers of Painting Technology Trends with Too Broad a Brush

When it comes to the debate on the relationship between Social Media and Knowledge Management there is an article that though quite old now (Sept 2008) is still very often cited as authoritative – “Social Media vs. Knowledge Management: A Generational War” by Venkatesh Rao. Whilst seductive on first read, this piece demonstrates for me the danger of using too broad a brush to trace the outline of unfolding trends in technology.

Rao’s blog paints a picture of two generations – the Baby Boomers and Generation Y-ers – fighting it out with two different paradigms of how they relate to and handle knowledge – through top-down Knowledge Management on the one hand, and free-for-all Social Media on the other. According to Rao, in the middle of this are the Generation X-ers, involved in both and allied to neither.

There is certainly always an inertia involved in any discipline when new ways of doing things emerge – people work long and hard to become established in their field, and they will naturally try to fit what emerges into paradigms with which they are already familiar. It is also natural that people will cleave to ways of doing things that they grow up with first – this certainly does not mean that people are stuck in ways of doing things dictated by their age – there are enthusiastic older users of social media just as much as there are younger people seeking to impose order and taxonomies on their knowledge.

Rao’s post is part and parcel of a wider mindset that views the unfolding of technology and knowledge-exchange in simplistic terms, uncomplicated by culture, race, class, education, profession, personality and so on. The whole notion of the Baby Boomer, for instance, is one that is located in a specifically western developmental paradigm – the use of computers and the web has followed a distinctive trajectory in other countries and continues to unfold with a specifically local flavour depending on the environment – familiar western demographics are not universal. If the sentiments in this blog reflected reality, marketing would be simpler – we could just pursue everyone simply based on generation. Wherever technology and techniques are adopted, they become embedded in and reshaped by the local cultural and social environment.

The history of the internet as much as any other unfolding of events, past or present, is not so easy to characterise or periodise – people are complex, their mass behaviour is often chaotic – otherwise, why would we need to get a grip on knowledge in the first place? Whilst it is perhaps easier to divine the likely future behaviour of corporates, where the profit imperative is clearly driving things, for everyone else there is no single dimension – age, race, class – which can be used to group and characterise behaviours. One of the joys of the net is that the way people adopt and use particular sites and technologies is unpredictable in the extreme – which is why we all necessarily indulge in some level of futurology.

Design, Technology and Metaphor

I find it useful to think about most creative activities as being centred around the notion of metaphor. In advertising, for instance, a very common metaphor is the equation of success in life, and most particularly success with the opposite sex, with the purchase of the appropriate product. Indeed, branding is all about the metaphorical association of the abstract brand with a desirable character trait, such as Coke = happiness, or Guinness = invention. Brand managers strive to ensure that this metaphor is implemented coherently and consistently across all points of contact between the consumer and the brand – from advertising and promotion, through purchase, to the actual moment of consumption itself and how consumers describe the brand thereafter.

Just like above-the-line activity, interface design and user experience, on the web and elsewhere, are all about constructing and maintaining a consistent metaphor. The most obvious example is the common computer Graphical User Interface (GUI) itself – the desktop, borrowing as it does some elements of the now-antiquated real-world office, such as folders and the trash can. Just as you can change your mind about stuff you have thrown away, provided the office cleaner hasn’t emptied the trash, you can change your mind and retrieve items you have thrown away on the computer desktop. In constructing such a metaphor, not all aspects of the office are implemented in the interface – we don’t get a choice of chairs, for instance – though our virtual desktops do get messy, and we frequently put pictures of loved ones on them. The creation and maintenance of interface metaphors is usually considered to be a design discipline. For successful interfaces, however, design and technology must work seamlessly together – which is often quite tricky to achieve.

Historically there has been quite a big divide between the disciplines of programming and design. This goes back to the days when programming computers involved the laborious writing of lines and lines of code, and using software involved entering mystic incantations from the keyboard and viewing the resulting lines of text – neither mouse nor pointer to guide the way. In some ways, the creation of code still has something in common with those early days – particularly the focus on the linguistic and algorithmic aspects of code, aspects that can’t be captured easily in a graphical environment. Whilst design aims to look at things from the user’s point of view, the abstract linguistic orientation of code keeps some developers at arm’s length from the user’s experience of the interface, whilst alienating many designers from the undoubted benefits of the deep understanding of interaction that can be derived from understanding code. Indeed one of my own preoccupations, both personally and professionally, is overcoming the synthetic divide between design and technology – which was one of the driving forces behind Lime Media being set up in the first place. At Lime we insist that our developers always keep their eye on the experience of using the system, whilst we expect our designers to understand the technical framework being used to implement their designs.

Given the way that software is often purchased these days – online, or on a mobile, at low cost, for immediate use – it is unreasonable to expect users to delve into the recesses of a help system. When was the last time you used a user manual, even when one was provided? If you are like me, probably a very, very long time ago. In most cases, except when using specialist or proprietary systems, users do have it relatively easy, finding themselves able to use new interfaces on first exposure, with little or no instruction. Why do they find it so easy? If the agency has done its job, it is either because the interface has been designed in accordance with accepted or de facto norms of behaviour, or, if the design is quite innovative, because the metaphor being set up has been clearly indicated through visual and interactional cues.

When you innovate, you must provide clear signposts about the metaphor you are using. Here is where the bridge can be built most effectively between design and programming. User interface design is all about the visual cues for use, as well as the seamless progression of related actions involved in undertaking common tasks. If done well, the user will hardly notice it has been designed at all. The code implementation, however, is vital to support the consistency of the metaphor – it provides the depth and the behaviour over time that leads the user to trust that the GUI will not suddenly do something unexpected, that it will behave as advertised by the visual cues, that it will intelligently reuse information provided and not, for instance, ask you more than once for your name and address – unless doing a security check.

For the user, the experience of an interface can’t be broken down into component parts. The smoothness of movement of the interface needs to be matched by the information handling and vice-versa. If everything is right, then users will only be thinking about what they are doing – finding information, making purchasing decisions, amending information – instead of the interface. As soon as the metaphor of the interface breaks down – through inconsistency or unexpected behaviours – that is when the interface design, or lack thereof, becomes apparent.

Just like art, design and technology are ways that we reshape the way that we engage with the world – rearranging what is important and how we take our bearings. Art often likes to do this through metaphors that jar and provoke, through drawing the viewer into a world in which confusion and shock are the order of the day. By contrast, interface design is all about making the user feel at home – with design and technology taking a low-key role, keeping users focussed on what they are doing.

In order to create an engaging user experience, and hence make systems useable and useful, design and technology must work together to create consistent, coherent and intuitive metaphors for engaging and guiding users in what they really want to do.

The Semantic Web – The Next Big Thing?

I have just started delving into the principles and practices involved in implementing semantic web technologies. The so-called semantic web is periodically touted as the next big thing, the future of the web, or more pithily “web 3.0”. The idea is being vigorously championed by the “inventor” of the web, Tim Berners-Lee, as well as by a major working group of the W3C – the body responsible for laying down the standards for the web.

To put it in a nutshell, the idea of the semantic web is to provide information in a way that allows intelligent applications to use and combine it in order to infer or derive other useful information. Just like web 2.0, the semantic web names both a trend and a set of technologies. Where web 2.0 named the notion of social networking and a set of technologies – AJAX – for providing a rich, integrated user experience, the semantic web names both the trend towards providing richly structured metadata on the web and the languages, such as OWL, that will be used for implementation.

As a trend the semantic web marks a shift of focus away from offering visually rich content and towards offering information in a format that will allow it to be used not just by web browsers as visual content, but by a whole range of applications as data. In particular, it is hoped that such data will be used intelligently to infer trends and patterns of behaviour through deep analysis, often called “data mining”.

In order to make such content reusable, web semanticists create what they call “ontologies” – structured data representations of things in dedicated modelling languages. To be effective, the modelling languages for creating ontologies need to be both well structured enough to allow for information to be processed and combined, whilst also being flexible enough to capture an appropriately granular level of significant and useful detail.

The move towards the sharing of knowledge in structured form has, of course, been going on for a long time. Basic HTML, the markup language of the earlier web, which was used to control layout as much as anything else, has been largely superseded by XHTML, which is used with more of an eye to the structure of the information being presented. Many organisations explicitly share content through publishing RSS news feeds or web-enabled data services. Despite this, the current level of reuse of data is still quite modest.

One of the doubts hanging over the semantic web is its reliance on a very simplified model of language, called propositional language, presented as subject-verb-object triples, such as “Jon is tall” or “Jon speaks English”. Only in such a simplified form is it possible to combine assertions in anything like a simple fashion, using syllogisms.

An example of a syllogism is:

A) Socrates is Athenian
B) Athenians speak Greek
C) Socrates speaks Greek
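Combining triples in this way can be sketched in a few lines of code. The inference rule and the naive pluralisation below are invented purely for the example – real semantic web reasoners work over RDF/OWL with far richer machinery:

```python
# Toy illustration of combining subject-verb-object triples by syllogism.
# Invented rule: if X "is" a member of group G, then X inherits any
# assertion made about that group.

triples = {
    ("Socrates", "is", "Athenian"),
    ("Athenians", "speak", "Greek"),
}

def infer(facts):
    """Return the input triples plus those derived by group membership."""
    derived = set(facts)
    for subj, verb, obj in facts:
        if verb == "is":
            group = obj + "s"  # naive pluralisation, good enough for the sketch
            for group_subj, group_verb, group_obj in facts:
                if group_subj == group:
                    derived.add((subj, group_verb, group_obj))
    return derived
```

Running `infer(triples)` adds the derived fact that Socrates speaks Greek – the mechanical essence of the syllogism above.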

The problem is that only a very tiny portion of useful language can be expressed like this. Most of what we understand is provided by an understanding of context. Take for example the following three statements:

A) The child is safe
B) The beach is safe
C) The car is safe

In these three cases it is clear that “is safe” means something significantly different. We understand the differences because we know practically what one does on a beach, or with a car, and what it means to keep a child safe. Without such contextual cultural knowledge, untangling these distinctions would be impossible.

Provided we can all agree about the context of use and how we are to interpret a given set of propositional triples, it is entirely possible to reduce a given area of useful language to this form. But a good deal of work is required in order to get the language into this form – and for a business this means expensive consultancy on agreeing frameworks for sharing meaning and information. It may be possible for limited-use frameworks to be developed for specific common scenarios, so that individuals can join in, but again, there will be a certain amount of work involved that will not be worth it for most sites.

So, we come to a basic case of cost/benefit analysis – what will you gain from making your information available on the semantic web, and what will it cost you to choose or create the appropriate structures? How long will it take to sanitise your existing information, or to prepare knowledge workers to label new information correctly? For anyone who has ever been involved in a major data migration, these issues are very far from trivial – so I can’t help thinking that, as usual, cost will be the driver.

In areas where information has intrinsic value – data rights management, or the payment of royalties for broadcast material, for example – it will make sense to use semantic web approaches, and here we are likely to see comparatively rapid uptake. In most cases, the cost of implementing the semantic web is likely to be prohibitive – so the next big thing, I would suggest, will only be big in a small way.
