User:Infinity Linden/OGP Trust Model

From Second Life Wiki

Computers should do exactly what you tell them, and no more.

John Steven (noted internet security expert)

Important Notes

  • this is a "work in progress." At some point in the near future, this page will be moved to a more "permanent" position.
  • we're trying to keep this page more or less "spec like." Discussion should take place on the discussion page.
  • there's a use cases sub page at User:Infinity_Linden/OGP_Trust_Model_UseCases
  • the trust model discussion here is a long-term project. Ideas for a near-term trust solution are outlined in OGP Trust - Phase 0.



The term "trust" in the domain of information systems covers a collection of related concepts, many of which do not yield gracefully to analysis. Without delving into the depths of academic descriptions of "trusted systems" or "verifiable security," we can describe trust as a system "doing what it's supposed to do" while not "increasing the opportunity for exploitation by bad actors." This "trust model" is an initial effort to enumerate stake-holders in the system and describe their security objectives. "Trust," in this sense, is simply the expectation that the system will "do the right thing," together with the means by which we verify that remote systems are behaving as we expect.

We start with the User Stories section, introducing narratives describing situations legitimate system users may find themselves in and what the system should do to maintain "high confidence" that bad actors cannot erode trust: the system should either deny illicit manipulation outright or detect it and allow recovery from it.

We continue by enumerating stake-holders in the system and briefly noting their requirements. We've listed End Users, Content Creators, Corporate IT, Agent and Region Domain Operators as major stake-holders. These are the people who have an interest in maintaining "trust".

Next we look at "security objectives." These are high level descriptions of behaviors we wish the system to express.

The Common Threats section attempts to catalog "bad behaviors" that should be identified and defended against.

Finally, the Implementation Notes section provides guidance on converting information in this document into practical, deployable systems.

User Stories

Specific use cases here -

Stakeholders and their Interests

End User

This is the traditional user of the system. They may be a casual user of Second Life or a corporate user who has come to the grid to collaborate on "work" projects. In either case, their interests include:

  • credential integrity - "bad guys" shouldn't be able to steal their online identity
  • inventory integrity - the system should protect against inventory theft, loss, or usability problems
  • specie integrity - the system should protect against loss of Linden Dollars
  • privacy integrity - the system should allow user control of their presence and range of visibility of their personal data
  • responsibility integrity - the system should recognize certain individuals as being responsible for others.

Content Creator

These are users who derive an income stream from Second Life. In addition to interests of traditional End Users, Content Creators also have these interests:

  • content integrity - content creators want to know that content they create cannot be illicitly duplicated, modified, lost or stolen

Corporate IT and ISP Operations

These are the people who maintain the networks connecting the client's machine to the rest of the system; in the case of corporate IT operations, they likely manage the user's systems as well.

  • network security - no system component (client software, agent domain software, region domain software, third party web service) should decrease the general availability, reliability or security of the network
  • peer system security - no system component (client software, agent domain software, region domain software, third party web service) should increase the risk of successful attack versus other systems in the network on which they operate

Client Software

This is the actual software running on the client machine; usually a viewer, but could be a web application using standard published APIs into the Agent or Region domains.

  • system security - use of the Second Life viewer or other client software should not place the user's system at greater risk of successful attack
  • flexible peer authentication - the system should be flexible enough to support multiple legacy peer authentication schemes

Agent Domain Administrator or Region Domain Administrator

This is the organization that operates an agent and/or region domain.

  • peer authentication - the system should support strong authentication techniques to ensure the identity of peer systems
  • flexible agent authentication - the system should be flexible enough to support domain-specific user authentication
  • forward security - for the purpose of third-party system interoperability, the system should provide authentication tokens usable ONLY for the explicit purpose described

Agent Domain Software / Systems or Region Domain Software / Systems

This is the software that implements agent and/or region domain services.

  • flexible peer authentication - the system should be flexible enough to support multiple legacy peer authentication schemes

Third Party Web Service Operators

These are systems operated by third parties for the benefit of Second Life users, Agent or Region Domain operators.

  • limitation of sensitive data - the system should not REQUIRE third parties to handle sensitive information

Protocol Implications

Notes on the implications of the trust model for the design of the relevant parts of the Open Grid Protocol

Online vs. Offline Information

In considering how the OGP should satisfy the various requirements of the trust model, we need to consider what information must or should flow 'online' (i.e. as part of the communications specified by the protocol), and what must or should be settled 'offline' (i.e. outside of the protocol, typically as a static change to the configuration or local data files of a domain server or client).

In establishing trust between an Agent Domain and a Region Domain, for instance, we can imagine two rather different styles:

  • Online trust establishment involves significant communication or negotiation via the protocol, at runtime. For instance, a Region Domain R might be statically configured to trust what a particular Application Domain A says about itself; when A contacts R to carry out some interop activity, the two would mutually authenticate, A would describe itself to R in some standard way, and R would then decide whether or not it ought to interact with A in the particular case.
  • Offline trust establishment assumes that most of the interoperation decisions are made in advance, outside the protocol, and the main function of the protocol in trust-establishment is simply authentication. For instance, a Region Domain R might be statically configured with a list of the kinds of interop activities it should allow with an application domain A (allowing A's residents to rez, allowing them to bring inventory of various types with them, accepting their profile information for display to other residents present in R, etc). Then when A contacts R to carry out some activity, the two would mutually authenticate, and (without any further protocol interaction on the subject) R would simply look up the activity in the list of things it's allowed to do with A in order to decide whether to proceed.

Online trust establishment allows for more flexible and automatic operation; on the other hand it is more complex, more error-prone, and requires more standardization and more protocol complexity. Offline trust establishment is more static, but also simpler at runtime and (probably?) less prone to error and exploitable bugs. OGP will probably have elements of both online and offline trust establishment. (Although we might want to start heavily offline, since that's easier? -- Dale Innis 13:38, 13 August 2008 (PDT))
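The offline style described above amounts to a static lookup after authentication. A minimal sketch, assuming a Python deployment; the table contents and the names `ALLOWED_ACTIVITIES` and `is_activity_allowed` are illustrative, not part of OGP:

```python
# Offline trust establishment: the region domain's interop policy is static
# configuration, consulted only after the peer has authenticated. No further
# negotiation happens in the protocol.

# Static, offline-configured policy: agent domain -> allowed interop activities.
# These domain names and activity names are hypothetical examples.
ALLOWED_ACTIVITIES = {
    "agents.example.com": {"rez_avatar", "transfer_inventory", "show_profile"},
    "other-agent-domain.example.org": {"rez_avatar"},
}

def is_activity_allowed(agent_domain: str, activity: str) -> bool:
    """After mutual authentication, decide by table lookup alone."""
    return activity in ALLOWED_ACTIVITIES.get(agent_domain, set())
```

An unknown agent domain gets an empty activity set, so the default is to deny.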

Security Objectives

secret things stay secret (confidentiality)

you should know who you're talking to (origin integrity)

you should be able to spot message tampering (message integrity)

purchases or transfers should be properly recorded (non-repudiation)

accessing the grid shouldn't make your system or network unduly vulnerable (system integrity)

you should be able to spot atypical events in the logs (adjudicated forensic evidence)

the permission system should be suitably expressive (expressive permissions)

Trust "Layers"

System Layer

Network Layer

Application Layer

Political Layer

Common Threats

Denial of Service

Session Hijacking

Illicit Recovery of Credential Authenticators

Impersonating a Peer System

Disrupting an Inventory or Specie Transaction

Impersonating an Avatar

Hiding Evidence of Illicit Activities

Forcing the System to Leak Information

Implementation Notes

System Layer Trust

Storing Pass Phrases

While the enrollment of users and establishment of peer relationships is outside the scope of the Open Grid Protocol, a few notes on password security are warranted. Recent years have seen numerous successful attacks against various databases containing identifying information for service users. The presence of an "in-world economy" provides sufficient motivation for bad actors to attempt a network attack with the objective of taking control of user accounts. Once compromised, the attackers can impersonate legitimate users, even to the point of transferring inventory or specie to other accounts.

For this reason, the Open Grid Protocol does not require the transmission of passwords in the clear. Additionally, we strongly encourage entities who maintain "shared secret" authenticators on behalf of their users to investigate current best practice for securing shared secrets "at rest and in motion."
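One widely used practice for securing shared secrets "at rest" is to store only a salted, iterated hash (PKCS#5 / PBKDF2) rather than the pass phrase itself. A minimal sketch in Python; the iteration count and function names are illustrative choices, not requirements of OGP:

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative; tune upward as hardware improves

def store_passphrase(passphrase: str):
    """Return (salt, derived_key); only these are written to the database,
    never the pass phrase itself."""
    salt = os.urandom(16)  # fresh random salt per user defeats rainbow tables
    key = hashlib.pbkdf2_hmac("sha256", passphrase.encode("utf-8"), salt, ITERATIONS)
    return salt, key

def check_passphrase(passphrase: str, salt: bytes, stored_key: bytes) -> bool:
    """Re-derive the key and compare in constant time to resist timing attacks."""
    key = hashlib.pbkdf2_hmac("sha256", passphrase.encode("utf-8"), salt, ITERATIONS)
    return hmac.compare_digest(key, stored_key)
```

A database compromised under this scheme yields only salts and derived keys, forcing the attacker into an expensive per-account brute-force search.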

  • todo: add reference to shared secret best common practices here.

Network Layer Trust Between Systems in Different Administrative Domains

One of the most common situations in terms of trust will be two systems from different "administrative domains." The Open Grid Protocol assumes a world where actors from independent administrative domains establish "relationships" based on a general use agreement (such as a Terms of Service agreement.) Fundamental to such interactions and relationships is the concept of positive identity or at the very least pseudonymity. One actor (like a resident running a viewer) must have faith that their remote peer (like an administrative domain) is really the peer they think it is (positive identity) or at the very least, it's the same peer they talked to last week (pseudonymity). Such network layer trust is a necessary enabler for "higher level" trust objectives such as inventory integrity, fraud detection, etc.

Transport Layer Security

It is impractical to recommend Virtual Private Networks for such systems. The administrative problems of adding arbitrary users to a managed VPN are great, especially considering the purpose of a VPN is to create a "private network" for members of a single administrative domain (or related administrative domains.)

Instead, technologies like Transport Layer Security (the successor to SSL or Secure Sockets Layer) provide a good solution. The focus of TLS is to a) ensure the identity of the peer system and b) maintain the confidentiality of information moving between systems.

In TLS (and the related Datagram Transport Layer Security, or DTLS), the identity of a server is established via presentation of an X.509 certificate. This certificate forms part of a "chain of trust" that leads from the server to a well-known and "trusted" third party (such as Verisign, Thawte, etc.) Certificates for these third parties are widely distributed and "easy" to verify.

X.509 certificates may also be used by clients to identify themselves to servers. Known as "client side certificates," this use of X.509 has seen relatively little adoption outside some high-security (and high administrative cost) applications. Instead, using a pass-phrase in conjunction with TLS to authenticate the client to the server, after the server has been authenticated by the client, is a viable (and considerably less expensive) alternative.

Therefore, when two systems from different administrative domains must positively identify a peer, the use of TLS for server authentication and password authentication for individual users is recommended.
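The client side of this arrangement can be sketched with Python's standard `ssl` module. This is an illustration of the recommended pattern, not a specified OGP mechanism; the function names are hypothetical:

```python
import socket
import ssl

def make_verifying_context() -> ssl.SSLContext:
    """A TLS client context that validates the server's X.509 chain of trust
    against the platform's well-known certificate authorities."""
    # create_default_context() enables both certificate-chain verification
    # and host-name checking by default.
    return ssl.create_default_context()

def open_authenticated_channel(host: str, port: int) -> ssl.SSLSocket:
    """Authenticate the server first; only then would the user's pass phrase
    (e.g. in a login request) be sent over the encrypted channel."""
    context = make_verifying_context()
    raw = socket.create_connection((host, port))
    return context.wrap_socket(raw, server_hostname=host)
```

The handshake in `wrap_socket` fails if the server's certificate chain or host name does not verify, so the pass phrase is never exposed to an unauthenticated peer.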

  • todo: insert reference to TLS somewhere in here

IP White-Lists and Black-Lists

For systems that must authenticate each other in the absence of a user account, shared secrets or passwords can be difficult to administer. Some organizations have strict controls over the distribution of such shared secrets; REQUIRING their use may therefore impose an administrative burden on some organizations.

One "low overhead" technique for establishing the identity of peer systems is through the use of an "IP White List." A system's white list simply lists the IP addresses of peer systems it trusts. If a network request comes from a peer whose IP address is in the white list, the system processes the request. If not, the request is ignored. This has the effect of limiting interaction to those systems on the list. Conversely, a "Black List" lists the IP addresses of systems known to be hostile. Using a black list means giving access to everyone save those on the list.
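The white-list check described above is a few lines of code. A minimal sketch using Python's `ipaddress` module; the addresses (drawn from the reserved documentation ranges) and names are illustrative:

```python
from ipaddress import ip_address, ip_network

# Offline-configured white list of trusted peer systems. Using networks
# rather than bare addresses also covers peers that occupy a whole subnet.
WHITE_LIST = [
    ip_network("203.0.113.0/24"),   # e.g. a trusted region domain's subnet
    ip_network("198.51.100.7/32"),  # e.g. a single trusted agent domain host
]

def accept_request(source_ip: str) -> bool:
    """Process a request only if its source address is on the white list."""
    addr = ip_address(source_ip)
    return any(addr in net for net in WHITE_LIST)
```

A black list is the same check inverted: deny when the address matches, accept otherwise.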

IP address based security is not without its problems, however. Some systems may operate with multiple or transient IP addresses. While this is not an insurmountable obstacle, it does tend to increase the size of black or white lists and increases the likelihood that an accidental omission will lead to a loss of service.

More seriously, techniques exist to allow malicious attackers to impersonate the IP address of some other system on both local and remote networks. ARP Poison Routing (APR) can be used to reroute traffic on a local network while attacks on the Border Gateway Protocol (BGP) can, in some circumstances, allow hijacking of IP addresses on remote networks.

  • todo: add reference to ARP poisoning attacks
  • todo: add reference to BGP attacks & RFC4272

Passwords, Pass-Phrases and Session Initiation

a few words on how to safely move shared secret based authenticators across a network with PKCS#5 and SRP go here
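Until those words are written, a simple HMAC challenge-response illustrates the underlying idea: proving knowledge of a shared secret without ever transmitting it. This is a placeholder sketch only; SRP provides much stronger guarantees (including resistance to offline dictionary attacks) and is the mechanism the section heading has in mind:

```python
import hashlib
import hmac
import os

def make_challenge() -> bytes:
    """Server side: a fresh random nonce per session defeats replay attacks."""
    return os.urandom(32)

def respond(shared_secret: bytes, challenge: bytes) -> bytes:
    """Client side: prove knowledge of the secret without sending it."""
    return hmac.new(shared_secret, challenge, hashlib.sha256).digest()

def verify(shared_secret: bytes, challenge: bytes, response: bytes) -> bool:
    """Server side: recompute and compare in constant time."""
    expected = respond(shared_secret, challenge)
    return hmac.compare_digest(expected, response)
```

Only the challenge and the response cross the network; an eavesdropper who captures both cannot replay them against a future (different) challenge.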

The Expression of Permissions

notes on (subject, object, permission) based permission systems and how it integrates into capabilities go here
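As a starting point for those notes, a (subject, object, permission) store can be sketched alongside one way it might integrate with capabilities: a granted capability is an unguessable token bound to a single triple. All names here are hypothetical illustrations, not OGP definitions:

```python
import secrets

PERMISSIONS = set()   # {(subject, object, permission)} triples
CAPABILITIES = {}     # opaque token -> (subject, object, permission)

def grant(subject: str, obj: str, permission: str) -> None:
    """Record that subject holds this permission on this object."""
    PERMISSIONS.add((subject, obj, permission))

def mint_capability(subject: str, obj: str, permission: str):
    """Issue a token usable ONLY for this triple, or None if not permitted.
    This mirrors the 'forward security' objective: the token carries no
    authority beyond the explicit purpose it was minted for."""
    if (subject, obj, permission) not in PERMISSIONS:
        return None
    token = secrets.token_urlsafe(32)  # unguessable by construction
    CAPABILITIES[token] = (subject, obj, permission)
    return token

def invoke(token: str, obj: str, permission: str) -> bool:
    """A capability invocation succeeds only for the bound object/permission."""
    entry = CAPABILITIES.get(token)
    return entry is not None and entry[1:] == (obj, permission)
```

Because authority travels with the token rather than with the caller's identity, a third party holding the token gains exactly one narrowly scoped right and nothing more.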

Establishing Trust with a Peer Administrative Domain

notes on how and why you want to support 'out of band' authentication between people responsible for different sets of machines go here

At the end of the day, the security features in the OGP protocol and the Second Life Grid exist to keep sensitive data secret and, where possible, to prevent "cheating." If it's not possible to prevent cheating, we make it possible to detect cheating. And if it's impossible to detect cheating, we try to disable access.

However, this leads to a simple problem: "How do you know the operator of an online service is trustworthy?" There are no simple answers to this question, but there is general agreement that such answers are not based on technology but on good, old-fashioned, human "trust." In computer security terminology, systems are "trusted" because they must be trusted. If they can't be trusted, then the system can't work.

We run into this problem in the Second Life Grid; there is no way to detect certain types of bad behavior. While we use strong authentication techniques to ensure the identity of a peer system, and we use cryptographically strong algorithms to protect users' passwords and prevent "replay attacks", some questions have no technical solution.

For instance, there is no technical means to ensure the operator of a region domain is not making illicit copies of your inventory when your avatar rezzes in regions they control.

Language Bindings for Occam

(See discussion here.)

See Also

External References

  • todo: add reference to TLS
  • todo: add reference to shared secret best common practices