Extended Principles for Human Information Interaction Design (23S2)
Each design principle below is grouped by category and presented with a brief description, examples (using a simple assignment bot), and the informing theory.
Category: Human sensitivity

Principle: Build rapport through interaction tone

Description: In conversation with other people, we generally do not speak as if we are giving a lecture or talking to a brick wall. We want to connect with the person we are speaking with so that they understand and (often) agree with what we are saying. We do this by showing that we understand them and are interested in what they are saying. Doing this through the tone of an interaction means steering away from short one-word responses or robotic-sounding speech.

Example: If a user asks when an assignment is due, to build rapport you would have your bot say something like, "That assignment is due on DD/MM/YYYY. It's the reflective journal one; did you want to know what it involves?", rather than just "DD/MM/YYYY". To increase the rapport building, you could also add some creative phrases of your own that you might notice people using in natural conversation, such as "that's a great question" or "it must be stressful having all of these assignments".

Informing theory: Bennett (2018); Oinas-Kukkonen & Harjumaa (2009)
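A minimal sketch of this idea in Python, assuming a hypothetical respond_due_date() helper and made-up assignment data (neither is part of any real framework):

# Hypothetical assignment data, for illustration only.
ASSIGNMENTS = {
    "assignment 1": {"name": "reflective journal", "due": "DD/MM/YYYY"},
}

def respond_due_date(assignment_key: str) -> str:
    """Answer a due-date question in a rapport-building tone."""
    info = ASSIGNMENTS[assignment_key]
    # A bare "DD/MM/YYYY" answers the question but builds no rapport;
    # acknowledging the user and offering a follow-up keeps the
    # conversation going.
    return (
        f"That assignment is due on {info['due']}. "
        f"It's the {info['name']} one. "
        "Did you want to know what it involves?"
    )

print(respond_due_date("assignment 1"))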
Principle: Conform to social norms

Description: This will depend on who your target user is. Different cultural and social norms apply depending on who your bot is designed to converse with.

Example: If your bot is designed to talk on behalf of a government organisation to the general public, you will need to build certain manners into its speech: for example, more formal, respectful language rather than colloquial, as the latter may be considered rude given the norms associated with social interactions with government representatives. If, instead, your bot is designed to talk to young uni students, a more colloquial, fun tone could be adopted, as a very formal way of speaking may appear distant and unfriendly given the norms that social group is used to.

Informing theory: Bennett (2018); Oinas-Kukkonen & Harjumaa (2009); Luger & Sellen (2016)
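One way to realise this in code is to keep the content fixed and select phrasing by audience. A minimal sketch, assuming a made-up REGISTERS table (the audience names and phrases are illustrative):

REGISTERS = {
    # Formal register for a government-facing bot.
    "general_public": {
        "greeting": "Good morning. How may I assist you today?",
        "closing": "Thank you for your enquiry.",
    },
    # Colloquial register for a student-facing bot.
    "uni_students": {
        "greeting": "Hey! What do you need a hand with?",
        "closing": "No worries, good luck with it!",
    },
}

def greet(audience: str) -> str:
    # The same intent is voiced differently depending on social norms.
    return REGISTERS[audience]["greeting"]

print(greet("uni_students"))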
Principle: Adapt to cognitive factors

Description: One way to describe cognition is how our brains process information. You need to be mindful of the nuances and shortcomings of the human information processor when presenting information to your users. You can think about cognitive biases, cognitive load, and other constraints when designing your bot.

Example: Humans are not great at handling large amounts of information at once, so scaffolding information in a logical way is a good way to design an interaction (think about the information you retain when engaging in a discussion about a topic compared to listening to a one-sided lecture on the same content).

Informing theory: Lieberman (1997); Gnewuch et al. (2017)
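A minimal sketch of scaffolding to manage cognitive load, with hypothetical content (the criteria listed are invented for illustration): the bot delivers one small chunk per turn and checks in, rather than dumping everything at once.

# Hypothetical content, broken into small chunks.
CRITERIA_CHUNKS = [
    "The reflective journal has three criteria. First: depth of reflection.",
    "Second: links between your experience and the unit readings.",
    "Third: clarity of writing and referencing.",
]

def explain_in_chunks(chunks: list[str]) -> None:
    """Deliver one chunk per turn instead of a lecture-style dump."""
    for i, chunk in enumerate(chunks):
        print("Bot:", chunk)
        if i < len(chunks) - 1:
            # Checking in lets the user control the pace.
            input("Bot: Ready for the next part? (press Enter) ")

explain_in_chunks(CRITERIA_CHUNKS)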
Principle: Personalise the information

Description: Build into your design an ability to customise the information it provides based on what your bot knows or learns about the user. In simpler systems, this could be done by deepening your understanding of your target user and customising the information to provide the best experience for them specifically. In more complex systems, this could mean asking questions or logging the behaviour of a more general set of users to determine the best interactions for them.

Example: Let's say you have a user who is very disorganised, never has a good grasp of what they have to do and when, and may be highly stressed because of this. (You determine this either by targeting them as a user type or by ascertaining it from their previous interactions with the bot.) You would then build frequent reminders of due dates and assignment criteria into your bot, at intervals in the conversation where this extra information might be appropriate. You would also build your assignment bot to speak to this user in more calming tones, including reassurances and encouragement that they will be able to get everything done. These kinds of solutions would not be appropriate for a different type of user who is generally on top of everything they need to do and just needs information quickly, as the extra encouragement and reminders would probably just annoy them. In this way, the information is personalised to a specific type of user.

Informing theory: Oinas-Kukkonen & Harjumaa (2009)
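A minimal sketch of profile-driven personalisation, assuming a hypothetical user-profile flag (the profile key and phrases are illustrative):

def due_date_reply(due: str, profile: dict) -> str:
    """Tailor the same fact to the user's needs."""
    if profile.get("stressed_and_disorganised"):
        # Reassurance plus a proactive reminder offer for this user type.
        return (
            f"It's due on {due}, so you still have time. You've got this! "
            "Want me to remind you of the criteria too?"
        )
    # A well-organised user just wants the fact, quickly.
    return f"Due on {due}."

print(due_date_reply("DD/MM/YYYY", {"stressed_and_disorganised": True}))
print(due_date_reply("DD/MM/YYYY", {}))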
Principle: Utilise human expertise

Description: Despite the rise of artificial intelligence, your users are always going to be smarter overall than your chatbot. If you assume your users have little capability or knowledge, your bot won't work very well. Acknowledge that your users have at least some knowledge, either directly or indirectly related to your topic. You can then leverage that in the conversation to let them participate more with your bot, rather than assuming your bot will provide all the information in the interaction.

Example: If a user is asking about the due date for a reflection assignment, the bot can ask them, "Did you already know what this assignment is about, or did you want me to go into some detail?" It could also ask them, "Which assignment do you think is most important, and I'll prioritise reminders for that one."

Informing theory: Lieberman (1997); Gnewuch et al. (2017)
Category: Communication fluency

Principle: Minimise information complexity

Description: If your topic involves complex information, try to walk your users through it by breaking it down into less complex parts.

Example: If a user asks in week 2 when the next assignment is due, the bot might start explaining that there is a formative part of assignment 1 due in the next couple of weeks, which doesn't contribute towards their grade but does have to be submitted in time to get feedback... That explanation could be a little confusing in the context of a chatbot conversation. Instead, your bot could tell the user that there is a formative feedback part for assignment 1 due on DD/MM/YYYY and ask whether they want to know what that formative part means for their marks. If they want to know, they can ask; if they want to skip over it, they have that choice too. This not only gives your user control over the information they receive but also breaks the information into less complex parts.

Informing theory: Oinas-Kukkonen & Harjumaa (2009); Luger & Sellen (2016)
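A minimal sketch of this progressive-disclosure pattern, with illustrative strings:

def formative_flow() -> None:
    # Lead with the simple fact, then offer the complex part separately.
    print("Bot: There's a formative feedback part for assignment 1 "
          "due on DD/MM/YYYY.")
    answer = input("Bot: Want to know what 'formative' means for your marks? ")
    if answer.strip().lower().startswith("y"):
        print("Bot: It isn't graded, but you must submit it on time "
              "to receive feedback on your draft.")

formative_flow()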
Principle: Maximise interaction predictability

Description: One of the main components of usability is predictability. We often run on 'autopilot', and if everything acts as we expect it to, we will be happy and won't have to do much 'thinking'. The best way to make an interaction with a chatbot predictable is to ensure it conforms to our expectations about how conversations usually go. Those expectations come from our experiences, which are primarily with humans. Therefore, to make an interaction predictable, our bots must act like humans. (Note: acting like a human does not mean being deceptive about the fact that it is a bot.) Besides this, also ensure that your bot operates in a predictable way more generally.

Example: This is generally achieved by thinking deeply about each response your bot gives and whether or not it feels like a natural human response. For example, try to avoid things like "say 'Assignment 1 Due' for the due date of assignment 1" or "click for more information". These sorts of phrases are not normally said in conversation and disrupt the flow of the interaction. In terms of more general predictability, think about consistency. If your bot responds with general information and the due date for assignment 1 when a user asks about assignment 1, but only responds with the due date when the user asks about assignment 2, then this is not consistent. The user will not be able to predict, for example, what type of response they will get if they ask about assignment 3 when the other two responses are inconsistent.

Informing theory: Oinas-Kukkonen & Harjumaa (2009)
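Consistency is easy to enforce when every assignment shares one response template. A minimal sketch with made-up assignment data:

# One template for all assignments makes responses predictable.
ASSIGNMENTS = {
    "assignment 1": ("reflective journal", "DD/MM/YYYY"),
    "assignment 2": ("group presentation", "DD/MM/YYYY"),
    "assignment 3": ("final report", "DD/MM/YYYY"),
}

def describe(key: str) -> str:
    name, due = ASSIGNMENTS[key]
    # Same structure every time: general info first, then the due date.
    return f"{key.capitalize()} is the {name}, due on {due}."

for key in ASSIGNMENTS:
    print(describe(key))

Because all three answers share the same shape, a user who has asked about assignments 1 and 2 can predict what asking about assignment 3 will return.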
Principle: Enhance information flow

Description: Information flow is the way information is presented to users. Just as the word "flow" suggests, this should be seamless and easy.

Example: A poor information flow is one where your bot requires the user to answer an unreasonable number of irrelevant questions before it allows them to ask about an assignment, or, the moment they say hello, it spits out all the information it has about assignments, or it asks them to type specific strings in order to reach the information they need. All of these are examples of the bot creating friction in the flow of the conversation and generally making it difficult for the user to access the information they need. A good bot, by contrast, lets the user jump straight to the answer to their question if, for example, they ask about an assignment date right away rather than just saying hello. The bot might then ask which assignment they need the due date for, allowing them to answer in a natural way. It will then give the due date, and perhaps ask whether they would also like information about what the assignment entails or which assignment is due next. It won't push the user down specific pathways; rather, it lets them direct the flow of the conversation in a natural way that aligns with how they generally conduct conversations with other humans.

Informing theory: Bennett (2018); Lieberman (1997)
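A minimal sketch of this flow, where naive keyword matching stands in for real intent recognition (purely illustrative):

ASSIGNMENT_DUE = {"1": "DD/MM/YYYY", "2": "DD/MM/YYYY", "3": "DD/MM/YYYY"}

def handle(message: str) -> str:
    text = message.lower()
    # Let the user jump straight to an answer; no forced preamble.
    if "due" in text:
        for num, due in ASSIGNMENT_DUE.items():
            if num in text:
                return (f"Assignment {num} is due on {due}. "
                        "Want to know what it involves?")
        # Ask a follow-up only when the request was genuinely ambiguous.
        return "Sure - which assignment do you need the due date for?"
    return "Hi! Ask me anything about the assignments in this unit."

print(handle("when is assignment 2 due?"))
print(handle("when's the due date?"))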
Principle: Contextualise the interaction

Description: Every interaction occurs inside a context, a set of circumstances that surround the interaction. Being able to customise the interaction based on each context is important for providing a good experience.

Example: If your user asks for the due date for assignment 1, then asks for the due date for assignment 2, then asks for information about assignment 1, you wouldn't provide the due date again, as it would be a waste of time (they just heard it slightly earlier in the conversation). If, however, at the start of the conversation the user asks for information about assignment 1 (rather than specifically the due date), you might provide the due date with the general assignment information, as they have not been told it in the context of that interaction and it might be convenient for them to hear. For a wider context, you might ensure that your bot is able to emphasise the urgency of assignment information when the due date is close to the date of the interaction. The context in this case is the closeness of the due date in time.

Informing theory: Bennett (2018); Gnewuch et al. (2017)
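A minimal sketch of both kinds of context, assuming hypothetical per-conversation session state (the class and thresholds are illustrative):

import datetime

class Session:
    """Tracks what has already been said in this conversation."""
    def __init__(self):
        self.dates_given: set[str] = set()

    def assignment_info(self, key: str, name: str, due: datetime.date) -> str:
        parts = [f"{key} is the {name}."]
        if key not in self.dates_given:
            # Only mention the due date if it's new to this conversation.
            parts.append(f"It's due on {due:%d/%m/%Y}.")
            self.dates_given.add(key)
        # Wider context: emphasise urgency when the due date is near.
        if (due - datetime.date.today()).days <= 3:
            parts.append("Heads up - that's only a few days away!")
        return " ".join(parts)

s = Session()
soon = datetime.date.today() + datetime.timedelta(days=2)
print(s.assignment_info("Assignment 1", "reflective journal", soon))
# Second request in the same session: the date is not repeated.
print(s.assignment_info("Assignment 1", "reflective journal", soon))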
Principle: Facilitate flexibility of interaction

Description: Even if you have a narrow target user group, not all users will act the same. Allow for individual differences by building some flexibility into your interaction.

Example: Not all users will want to access the same information in the same order as other users. You should allow users to jump between intents without getting completely locked into one chain of intents from beginning to end. For example, if a user wants to know the marking criteria for each assignment, starting from assignment 2, then 3, then 1, you should allow for this. If your bot requires them to first ask for the due date for assignment 1, for example, before it can tell them about the criteria for that assignment, or anything about assignments 2 or 3, then it is inflexible and will frustrate users.

Informing theory: Gnewuch et al. (2017)
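Flexibility tends to fall out naturally when each intent has an independent handler instead of a scripted sequence. A minimal sketch with made-up handler names and data:

# Each intent is handled independently, so users can ask in any order.
CRITERIA = {"1": "depth of reflection", "2": "teamwork", "3": "analysis"}
DUE = {"1": "DD/MM/YYYY", "2": "DD/MM/YYYY", "3": "DD/MM/YYYY"}

def criteria_intent(n: str) -> str:
    return f"Assignment {n} is marked on {CRITERIA[n]}."

def due_date_intent(n: str) -> str:
    return f"Assignment {n} is due on {DUE[n]}."

# No state machine forces "due date before criteria": any order works.
for n in ("2", "3", "1"):
    print(criteria_intent(n))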
Category: Artificial agent credibility

Principle: Create realistic expectations of capability

Description: Expectation management is useful in many domains, including in conversations (both human-to-human and human-to-bot). Part of enhancing predictability (see the principle above) is aligning with expectations. There are some expectations that you can't control (e.g. social norms) but can be aware of, and there are others that you can manage yourself. The expectations your users have about the capability of your bot are something you can manage yourself.

Example: Your bot will only have limited capability. For example, our imagined assignment bot can only talk about assignments in this unit. Rather than presenting it as a go-to knowledge bot on all things about the unit (creating the unrealistic expectation that users can ask it about workshop details, classroom locations, assignment results, etc., and leading to frustration when it can't answer these things), you need to be up-front about what your bot can and can't do. This way, users will know what they can and can't talk to your bot about and won't get annoyed when they encounter something your bot can't respond to.

Informing theory: Luger & Sellen (2016)
Principle: Provide sufficient capability for the information task

Description: Having said the above, you also need to ensure that you can meet the basic needs of your target users.

Example: If your bot can only answer questions about one assignment and not the rest, or only knows the due dates for assignments 1 and 2 but the marking criteria for all 3, then it is unlikely to be sufficient to meet your users' goals of knowing what they need to do for assignments and when they need to do it.

Informing theory: Gnewuch et al. (2017)
Principle: Provide transparency and accountability

Description: Humans are generally quite good at telling whether they are being deceived or led on. Any technology you provide needs a level of transparency and accountability for users to trust it. Be honest with your users, and allow your bot to admit mistakes without trying to brush over them.

Example: If your bot is unable to answer a question or speak about a topic, make sure it gives a reason: something like "I'm sorry, I don't actually have the information about that", rather than "that's a silly question" or "that's off topic, try something else". Being accountable as the creator of your bot is also preferable, so you could add something like "but that's an interesting question; I will give that feedback to my creator so that I can answer it better in the future".

Informing theory: Oinas-Kukkonen & Harjumaa (2009); Luger & Sellen (2016)
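A minimal sketch of an honest fallback handler that also records the gap for the bot's creator (the log file name is an illustrative choice):

def fallback(question: str) -> str:
    """Respond honestly when the bot has no answer, and record the gap."""
    # Log unanswered questions so the creator can extend the bot later.
    with open("unanswered_questions.log", "a") as log:
        log.write(question + "\n")
    return ("I'm sorry - I don't actually have the information about that. "
            "It's an interesting question, though; I'll pass it on to my "
            "creator so I can answer it better in the future.")

print(fallback("Where is the week 5 workshop held?"))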
Principle: Build on user-provided information

Description: Similarly to utilising human expertise, ensure you integrate what the user says into the interaction.

Example: If a user mentions early in the conversation that they have an extension for an assignment, and they later ask about the due date for that assignment, the bot should integrate this earlier information into the due date without having to ask for it again. When this integration happens, it would also be wise to have your bot tell the user so, in case they did not expect the due date to automatically reflect their extension. This last part is an example of providing transparency.

Informing theory: Oinas-Kukkonen & Harjumaa (2009)
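A minimal sketch combining this principle with the transparency one, using hypothetical session state and an invented base date:

import datetime

session = {"extension_days": 0}  # Filled in earlier in the conversation.

def record_extension(days: int) -> str:
    session["extension_days"] = days
    return f"Got it - I'll remember your {days}-day extension."

def due_date(base: datetime.date) -> str:
    adjusted = base + datetime.timedelta(days=session["extension_days"])
    if session["extension_days"]:
        # Be transparent that earlier information changed the answer.
        return (f"With your extension, it's due on {adjusted:%d/%m/%Y} "
                f"(originally {base:%d/%m/%Y}).")
    return f"It's due on {base:%d/%m/%Y}."

print(record_extension(7))
print(due_date(datetime.date(2024, 4, 15)))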
Principle: Guide the interaction to fulfil the information task

Description: You are designing a conversation. You therefore have some control over how that conversation eventually plays out with your users. Thinking about guiding the conversation enables you to steer around potential gaps in your bot's capability, as well as to scaffold a meaningful, personalised conversation.

Example: If your target user group is students who are stressed about the unit and just hoping to pass rather than aiming for a 7, your bot could be designed to focus only on the elements of the unit essential to passing. It might point out things such as the Tier 1 stage for assignment 1 which, if passed, guarantees a pass for the final version of the assignment. Rather than simply waiting for the user to ask the right questions, the bot could begin by suggesting some quick tips for passing the unit in a logical order (e.g. by urgency or importance) and allow the user to explore each in more detail.

Informing theory: Oinas-Kukkonen & Harjumaa (2009); Luger & Sellen (2016)
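A minimal sketch of a guided opening that proposes a prioritised menu instead of waiting for the right question (the tips themselves are illustrative):

# Tips ordered by urgency; the bot leads with these instead of waiting.
TIPS = [
    "Submit the Tier 1 stage of assignment 1 - passing it guarantees "
    "a pass for the final version.",
    "Submit the formative part on time so you receive feedback.",
    "Check the marking criteria before you start writing.",
]

def guided_opening() -> None:
    print("Bot: If you're just aiming to pass, here are the essentials:")
    for i, tip in enumerate(TIPS, start=1):
        print(f"  {i}. {tip}")
    print("Bot: Ask me about any of these and I'll go into more detail.")

guided_opening()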
References

Bennett, G. A. (2018). Conversational style: Beyond the nuts and bolts of conversation. In Studies in Conversational UX Design (pp. 161-180). Springer, Cham. https://link.springer.com/chapter/10.1007/978-3-319-95579-7_8

Gnewuch, U., Morana, S., & Maedche, A. (2017). Towards designing cooperative and social conversational agents for customer service. In Proceedings of ICIS 2017. http://aisel.aisnet.org/icis2017/HCI/Presentations/1

Lieberman, H. (1997). Autonomous interface agents. In Proceedings of CHI '97 (pp. 67-74). ACM. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.30.8904&rep=rep1&type=pdf

Luger, E., & Sellen, A. (2016). "Like having a really bad PA": The gulf between user expectation and experience of conversational agents. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (pp. 5286-5297). ACM. http://edithlaw.ca/cs889/2018/reading/Asking/Paper2.pdf

Oinas-Kukkonen, H., & Harjumaa, M. (2009). Persuasive systems design: Key issues, process model, and system features. Communications of the Association for Information Systems, 24(1), 28. https://aisel.aisnet.org/cgi/viewcontent.cgi?article=3424&context=cais