xebia/nimbus

Name: nimbus

Owner: Xebia BV

Description: An Akka (HTTP) driven Google Cloud Datastore Client

Created: 2017-07-08 15:32:18.0

Updated: 2018-02-12 01:35:46.0

Pushed: 2017-07-11 20:50:14.0

Homepage: null

Size: 102

Language: Scala

README

Nimbus

Nimbus is an Akka HTTP powered client for Google Datastore. It uses Akka HTTP's connection pool implementation to ensure optimal running performance with a small footprint.

The client consists of two separate layers:

  1. A raw layer which is as unopinionated as possible, translating the REST specification and its objects into a pluggable, stackable and type-safe solution for communicating with Google Datastore. All traits delivering this functionality can be found in the RawClient class.
  2. An opinionated layer which abstracts models and API calls into a friendlier and more usable whole. Most development will happen within this layer, to provide a developer-friendly, batteries-included solution for communicating with Google Datastore.
Current state

In its current state, Nimbus should be treated as alpha software. The client isn't feature complete yet, the structure and DSL can change, and heavy testing in production still has to be done. However, given its foundations and the technology powering the client, most parts of the client should be stable for test usage.

Currently available
Currently missing / soon to be added
Usage

The RawClient can be initialized with a projectId and a Credentials instance. The Credentials class consists of an email address and a private key. These can be constructed any way the user desires, though it's easiest to construct them through the functionality available in the OAuthApi object:

readCredentialsFromFile(file: File): Credentials

readCredentialsFromEnvironment(): Credentials 

When the readCredentialsFromEnvironment() method is used, the credentials (note: the JSON variant, not the P12 key) will be read from the file defined in the GOOGLE_APPLICATION_CREDENTIALS environment variable.
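As a hedged sketch, both helpers could be used like this (the OAuthApi import path below is an assumption; the two method signatures are taken from above):

import java.io.File

// NOTE: the package of OAuthApi is assumed here; check the project sources for its actual location.
import com.xebia.nimbus.datastore.api.OAuthApi._

// Read the service-account JSON pointed to by GOOGLE_APPLICATION_CREDENTIALS.
val envCredentials = readCredentialsFromEnvironment()

// Or read credentials from an explicit service-account JSON file.
val fileCredentials = readCredentialsFromFile(new File("/path/to/service-account.json"))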

Using these credentials, the RawClient can be initialized:

val client = RawClient(readCredentialsFromEnvironment(), "your_project_id")
Raw functionality

Upon initialization, the RawClient automatically creates its connection pool and authentication layer, and calls can be made to Google Datastore:

val entities = List(
  RawEntity(Key.named(client.projectId, "$TestObject", "Dog" + randomPostfix), Map("feet" -> Value(IntegerValue(4)), "color" -> Value(StringValue("Brown")))),
  RawEntity(Key.named(client.projectId, "$TestObject", "Cat" + randomPostfix), Map("feet" -> Value(IntegerValue(4)), "color" -> Value(StringValue("Black"))))
)

val mutations = entities.map(Insert.apply)
val keys = entities.map(_.key)

for {
  transactionId <- client.beginTransaction()
  _             <- client.commit(Some(transactionId), mutations, CommitMode.Transactional)
  lookup        <- client.lookup(ExplicitConsistency(ReadConsistency.Eventual), keys)
} yield lookup

For coverage of the rest of the raw functionality, it's best to check the test suite.

DSL

The opinionated layer can be initialized either by passing along the namespace of your objects and an already initialized client (when you want to reuse a client across multiple namespaces):

val nimbus = Nimbus(namespace, client)

Or by passing the credentials directly:

val nimbus = Nimbus(credentials, projectId, namespace)

As additional parameters, both an OverflowStrategy can be supplied as a back-pressure strategy and a maximumRequestsInFlight parameter, which states how many requests may be outstanding before the back-pressure strategy kicks in. By default, an OverflowStrategy.backpressure strategy is used, combined with a max-in-flight of 1024.
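A hedged sketch of passing those parameters (the parameter names and their position in the apply call below are assumptions, not a confirmed signature):

import akka.stream.OverflowStrategy

// Assumed parameter names/order; consult the Nimbus companion object for the real signature.
val nimbus = Nimbus(credentials, projectId, namespace,
  overflowStrategy = OverflowStrategy.backpressure,
  maximumRequestsInFlight = 2048)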

Consistency levels

All operations against Google Datastore can be performed either with a transaction id or by setting the consistency level to either Eventual or Strong. For each of the functions described below (and all others available in the DSL), a counterpart for each of these levels exists. The short-hand functions default to eventual consistency.
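Purely as an illustration of that pattern, a hedged sketch (the suffixed method names lookupWithStrongConsistency and lookupWithinTransaction below are hypothetical placeholders, not confirmed API; only the plain lookup short-hand appears later in this README):

// The plain short-hand defaults to eventual consistency.
nimbus.lookup[Person]('Person -> "Mike")

// Hypothetical counterparts, named here only to illustrate the per-level variants:
nimbus.lookupWithStrongConsistency[Person]('Person -> "Mike")
nimbus.lookupWithinTransaction[Person](transactionId, 'Person -> "Mike")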

Entities and paths

The created DSL / client is able to write and read objects for which an EntityConverter[A] type class is implemented, or can use the Entity class directly as a pass-through.

The Entity:

final case class Entity(path: Path, properties: Map[String, Value])

This is a class which has a path and a set of properties. The path is an abstraction over the default Key structure available within Google Datastore and uses the namespace defined in the Nimbus client / DSL to make creating and handling these keys easier. The properties contain the actual values that are eventually stored in Google Datastore. The set of types available in Google Datastore is rich enough to translate most data classes within applications, and implicits are available to transform the basic Scala types to and from Google Datastore Values:

import com.xebia.nimbus.Path._
import com.xebia.nimbus.datastore.model.Value._

case class Person(name: String, age: Int)

implicit val personEntityFormatter = new EntityConverter[Person] {
  override def write(p: Person): Entity = Entity('Person, p.name, Map("name" -> p.name, "age" -> p.age))

  override def read(entity: Entity): Person = Person(entity.properties("name").as[String], entity.properties("age").as[Int])
}

Paths are automatically transformed to keys and can be nested to create tree / directory-like structures:

('Person -> "Bob") / ('Children -> "Mike")
('Account -> 577321) / 'Transaction

Every path consists of the kind of an entity on the left side and the name (String) or id (Long) on the right side. When objects are stored which only define a kind but no name or id, an identifier is generated automatically by Google Datastore.
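For example, a minimal sketch of storing an entity at a kind-only path, assuming the Path and Value implicits shown earlier are in scope, so that Datastore generates the identifier:

// The tail segment 'Transaction has a kind but no name/id, so Datastore assigns one on write.
val transactionPath = ('Account -> 577321) / 'Transaction
nimbus.upsert(Entity(transactionPath, Map("amount" -> 10, "currency" -> "EUR")))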

CRUD

Using either a serializable case class (one for which a formatter is defined as above) or a direct Entity, objects can be inserted, upserted, updated and deleted:

val mike = Person("Mike", 8)
val nikky = Person("Nikky", 12)
val bob = Person("Bob", 48)

"Nimbus basic DSL" should {
  "correctly store objects" in {
    for {
      _ <- nimbus.insert(Seq(mike, nikky, bob))
      _ <- nimbus.delete('Person -> bob.name)
      _ <- nimbus.update(Seq(mike, nikky))
      _ <- nimbus.upsert(Entity('Person, "Bob", Map("name" -> "Bob", "age" -> 48)))
    } yield {}
  }
}

Lookup

Stored items can be looked up using the lookup API:

for {
  m <- nimbus.lookup[Person]('Person -> mike.name)
  b <- nimbus.lookup[Entity]('Person -> "Bob")
} yield {
  m.get.age shouldBe 8
  b.get.properties("age") shouldBe 48
}

Querying

Besides the look-up functionality, more extensive querying can be done using the query API:

import com.xebia.nimbus.Query._

for {
  _  <- nimbus.upsert(Seq(mike, nikky, bob))
  q  <- nimbus.query[Person](Q.kindOf('Person).filterBy('age > 6))
  q2 <- nimbus.query[Person](Q.kindOf('Person).filterBy('age > 6 and 'age < 20))
  q3 <- nimbus.query[Person](Q.kindOf('Person).filterBy('age > 6 and 'age < 20 and 'age > 10))
  q4 <- nimbus.querySource[Person](Q.kindOf('Person).filterBy('age > 6)).runWith(Sink.seq)
} yield {
  q.results should contain theSameElementsAs Seq(mike, nikky, bob)
  q2.results should contain theSameElementsAs Seq(mike, nikky)
  q3.results should contain theSameElementsAs Seq(nikky)
}

The query DSL exposes multiple functions which can be used to build a query (a combined example follows this list):

kindOf(kind: Symbol): QueryDSL

orderAscBy(field: Symbol): QueryDSL

orderDescBy(field: Symbol): QueryDSL

filterBy(filter: Filter): QueryDSL

projectOn(fields: Symbol*): QueryDSL

startFrom(cursor: String): QueryDSL

endAt(cursor: String): QueryDSL

withOffset(offset: Int): QueryDSL

withLimit(limit: Int): QueryDSL
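These builders can be chained; the combined example referenced above is a small sketch, assuming the Query import and the Person converter from the earlier examples are in scope:

// All 'Person entities older than 6, ordered by age, limited to 10 results.
val adultsQuery = Q.kindOf('Person)
  .filterBy('age > 6)
  .orderAscBy('age)
  .withLimit(10)

nimbus.query[Person](adultsQuery)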
Running tests

The test suite expects the Google Cloud Datastore emulator to be running on port 8080. The following command can be used to start the emulator (the Google Cloud tools have to be installed):

gcloud beta emulators datastore start --host-port localhost:8080 --consistency 1.0 --project nimbus-test --data-dir project-test
