Tech Traps: Beware Style Over Substance – And Siloing – In Risk-Profiling
Greg Davies, head of Behavioural Finance at Oxford Risk, explains why wealth managers must resist the temptation to focus on user experience at the expense of a sound methodology in risk-profiling – and why invaluable client data must not be allowed to fall through the cracks.
  “Stupidity well packaged can sound like wisdom,” wrote Burton
  Malkiel in A Random Walk Down Wall Street.  It is a
  lesson well-learnt, and well-used throughout financial services,
  albeit not always for well-meaning purposes.
  
  Common approaches to risk-profiling have quickly gone from
  nowhere to temptingly well-packaged triviality, but all too often
  have forgotten to stop and pick up a sound scientific methodology
  along the way.
  
  There are plenty of ways to do risk-profiling poorly. One of the
  latest is potentially the most dangerous, because on the surface,
  it looks like a great idea.
  
  Playtime is over
Gamification – increasing user engagement by improving the user
  experience, specifically by incorporating techniques from games –
  is rightly a mainstay of behaviour-change protocols. In a complex
  and, to many, mundane field like financial suitability, not using
  some gamification techniques feels like an unforgivable
  oversight.
  
  However, such temptation should signal caution. Gamification is
  great for engagement, but the techniques alone are not
  enough.
  
  At Oxford Risk, we embrace the techniques of gamification
  wherever we can – particularly in the design of user interfaces
  to enhance client engagement and experience. However, we never
  gamify at the expense of accuracy. The game is to enhance
engagement, not sell snake oil. Gimmicky games trivialise risk tolerance; they do not test it.
  
There is a time for simplifying, and a time for science. For example, some “tests” lean on individually intriguing but scientifically vacuous inputs, such as a user’s past investment actions, or even a self-assessment of their risk tolerance. Users like these inputs because they attach a psychologically meaningful narrative to past actions, but academics dislike them because they add nothing while taking away validity, integrity, and relevance – and they can end up “optimising” for precisely the behaviours we want to guard against.
  
Form should follow function, not replace it; if you are not measuring what you’re supposed to be measuring, the playfulness of your polish doesn’t matter.
   
  Pretty vacant
A focus on a stylish front-end at the expense of the scientifically robust substance on which any psychometric assessment
  must be grounded creates a Potemkin village of a process – great
  for show, but ultimately not fit for purpose. Capturing clicks is
  no use without capturing valuable, usable, client insights.
  
  Einstein’s famous (though possibly misattributed) entreaty to
  make everything “as simple as possible, but no simpler” applies
  here. When technology tackles complexity, it tends to err on one
  side or the other: either technically optimal solutions with no
  care for user experience, or solutions simplified so far that
  anyone can use them, but where no one learns anything useful from
  doing so.
  
A genuinely effective solution sits in a sweet spot: one built on a deep understanding of both the textbook solution and the behavioural traits and tendencies of its users.
  
  Customer understanding is crucial. But helping clients navigate
  complexity is better than pretending that it can be
  cost-effectively avoided. The real returns from an understanding
  of the customer are preferable to an artificial understanding by
  the customer.
  
  Being able to trust the outputs of a profiling process means
  being able to trust both the user’s inputs (of which their
  engagement with and understanding of what they’re doing are
  elements) and the methodologies that underpin the design of the
  assessment and its subsequent creation of the output. Trust needs
  to be earned with expertise, not masked with marketing.
  
Behavioural science has a crucial role to play in each of these steps: in ensuring the assessment functions correctly, and in presenting its results in a form fit for easy consumption. But the science must come first.
  
  The quality of a psychometric test is a question of validity and
  reliability – that it measures what it claims to measure and that
  when inputs are consistent, so are outputs. Testing the tests
requires a silent sophistication: complexity working beneath the surface without necessarily being evident on it.
  
An effective question set is like a team, or an orchestra: it is more than a mere collection of individual parts; the correlations between those parts count too. Picking the best team requires trials to see which elements work best together.
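
To make the ideas of validity, reliability and inter-item correlation concrete, here is a minimal, purely illustrative Python sketch of the sort of checks psychometricians run on a question set: internal consistency (Cronbach’s alpha), test-retest stability, and the correlations between individual questions. The respondent numbers, item counts, scales and data below are invented for the example; this is not any particular firm’s methodology.

```python
import numpy as np


def cronbach_alpha(responses: np.ndarray) -> float:
    """Internal consistency of a question set.

    responses: array of shape (n_respondents, n_items), one column per question.
    """
    n_items = responses.shape[1]
    item_variances = responses.var(axis=0, ddof=1)      # variance of each question
    total_variance = responses.sum(axis=1).var(ddof=1)  # variance of the summed score
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)


# Hypothetical illustration only: 200 respondents answer 10 risk-tolerance items
# on a 1-5 scale, twice. The data are randomly generated around a single latent
# trait purely to show the calculations, not drawn from any real assessment.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))  # each respondent's underlying risk tolerance


def administer(noise_scale: float = 0.8) -> np.ndarray:
    """Simulate one administration of the 10-item questionnaire."""
    raw = 3 + latent + rng.normal(scale=noise_scale, size=(200, 10))
    return np.clip(np.round(raw), 1, 5)


first, second = administer(), administer()

# Reliability: internal consistency, and stability of scores across administrations.
print(f"Cronbach's alpha: {cronbach_alpha(first):.2f}")
print(f"Test-retest r:    {np.corrcoef(first.sum(axis=1), second.sum(axis=1))[0, 1]:.2f}")

# 'Correlations between the parts': how the individual questions move together.
inter_item = np.corrcoef(first, rowvar=False)
mean_r = inter_item[np.triu_indices_from(inter_item, k=1)].mean()
print(f"Mean inter-item correlation: {mean_r:.2f}")
```

In practice these numbers are judged together: a slick interface can leave every one of them poor, and none of them is visible from the front-end alone.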
  
  Suitability shouldn’t stop at the start line
The complexity investors need to navigate is a function of the number of moving parts involved. Because an investor’s willingness to trade off the chance of bad outcomes for good ones (i.e. their risk tolerance) is relatively stable, while investment markets move around constantly, most attention has traditionally been paid to risk tolerance and the wider suitability process at the outset.
  
  However, over the course of an investment journey, it is the
  moving human parts – a panoply of behavioural reactions – that
  are arguably more worthy of attention. Suitability is dynamic; it
  suffers when seen as a snapshot.
  
Much risk-profiling technology is insufficiently creative because the client “profile” is treated as an onboarding issue only, segregated from the reporting and relationship management that influence investor-investment interactions through changing circumstances.
  
Humans do not turn into robots when they start to own investments. Siloing risk tolerance into a bucket of onboarding chores leads to suitability and client-satisfaction risks, and to lost opportunities in sales and engagement, because of a rushed and incomplete approach to client attitudes. Just because a transient behaviour shouldn’t be baked into an investment solution does not mean it should be ignored when deciding how that solution is presented and managed over time.
  
  The person who makes a plan is rarely the person a plan is made
  for, whether that’s the alert and inspired future gym-goer of New
  Year’s Eve turning into the tired and emotional duvet-hugger of
  New Year’s Day, or the calm investor sitting with an advisor for
  an hour turning into the confused one reading the news six weeks
  later.
  
  Simple, but not simplistic
  Too often, to borrow a phrase from historian Will Durant, “The
  fertility of simplicity defeats the activity of intelligence.”
  Engaging investors with the profiling process is vital, but if
  it’s done at the expense of competently measuring what you need
  to measure, then it’s both dumb and potentially dangerous.
  Regulatory risks rise as the seriousness of testing investor
  attitudes to risk falls.
  
  Forgetting what you’re trying to do and why, in favour of how
  you’re doing it, is a common error when designing shiny new
  technological toys.
  
  Humans and tech perform best when they play together. Managing
  moving financial and emotional parts benefits from blending human
  and technological qualities. Humans are good at some parts of the
  suitability process. Tech is good at others. They each have
  distinct, complementary, roles to play. Tech should be leveraged
  to help humans navigate complexity, not add another layer of it,
  or become an end in itself. As simple as possible, but no
  simpler; beware both the simplistic and the over-engineered.
  
Technology offers the opportunity to produce rich and accurate financial-personality assessments at scale, which in turn can be built into hyper-personalised approaches to engagement, communication, portfolio construction and reporting.
  
Well-designed digital platforms deliver personalised, easy-to-use information to clients, shaped by their behaviours. By taking the legwork out of the risk-profiling process, technology can save human energy for appreciating the ambiguity inherent in interpreting its results.
  
  But it can do this only when being good is followed by looking
  good, not replaced by it.
  
This forms part of this publication’s latest research report, Technology Traps Wealth Managers Must Avoid.