
2 posts tagged with "graphql"


6 min read

Sometimes the full power of a database can take too long to get your teeth into. Instead, you might say:

I know how I want to talk to my database, GraphQL, can't I just describe my database in GraphQL?

To answer that question we created SimpleGQL!

The purpose of SimpleGQL is to provide frontend developers with a backend that "just works" but can later be extended and interacted with using the full suite of Zef capabilities.


A GraphQL example

As a real-world example, we are building a budgeting app called "Ikura", using Zef as the backend. Ikura is mainly a frontend app - to declare the database that Ikura will use, we want to write something like:

type User {
  email: String!
  name: String
  dob: DateTime
  transactions: [Transaction]
}

type Transaction {
  user: User
  categories: [Category]
  amount: Int
  date: DateTime
}

type Category {
  transactions: [Transaction]
  name: String
  icon: String
}
From this, we want to be able to just say:

python -m zef.gql.simplegql ikura.graphql ikura-database

and have a GraphQL server be created, with all kinds of endpoints to query and mutate the database, which is a Zef graph created and tagged with ikura-database. Note that we haven't specified any queries/mutations in the above example... and we want to keep it that way!

Building the server pipeline

The server is made up of three components:

  1. A parser (the Facebook reference implementation) to parse the ikura.graphql file.
  2. The SimpleGQL part: our code that talks to the Zef graph and generates various endpoints including filtering, sorting, authentication and hooks.
  3. The server itself, ariadne, to which we feed the appropriate endpoint callbacks.

In this post I want to talk a little bit about how ZefOps made implementing some features of SimpleGQL a breeze.

🔌 Generating endpoints 🔌

We have taken a leaf out of the book of Dgraph to guide our development of an API. This should mean anyone migrating from Dgraph will find it trivial to use a Zef graph instead. This also allows one to self-host a Dgraph-like GraphQL server.

In our API we provide the following queries for each type, where the word Type is substituted for the type name in the .graphql file:

  • getType: obtain a single instance.
  • queryType: search and retrieve multiple instances with filtering, sorting and pagination.
  • aggregateType: pull out useful totals, averages, minima and maxima over what queryType would normally return.

Similarly, we provide some mutations:

  • addType: add a new instance (or update an existing one by ID).
  • updateType: update one or more instances matching criteria, to set/remove their fields.
  • deleteType: delete one or more instances.
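As a rough illustration of this naming scheme, the endpoint names can be derived mechanically from the type names. This is a plain-Python sketch of the convention only, not the actual SimpleGQL generator, which also builds the resolver logic behind each endpoint:

```python
# Hypothetical sketch: derive the per-type endpoint names described above.
def endpoint_names(type_names):
    queries, mutations = {}, {}
    for name in type_names:
        queries[name] = [f"get{name}", f"query{name}", f"aggregate{name}"]
        mutations[name] = [f"add{name}", f"update{name}", f"delete{name}"]
    return queries, mutations

queries, mutations = endpoint_names(["User", "Transaction", "Category"])
print(queries["Transaction"])  # ['getTransaction', 'queryTransaction', 'aggregateTransaction']
```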

🔍 Filtering 🔍

The code to generate the above endpoints is available in our GitHub repository. The part I want to show off now is the filtering, which makes good use of the lazy zefops. For example, we want a query:

query {
  queryTransaction(filter: {
    amount: {ge: 5, le: 10},
    categories: {size: {eq: 1}}
  }) {
    ...
  }
}

to return the transactions the user can see which have an amount between 5 and 10 and are assigned to exactly one category. Internally, this filter structure is converted to a zefop that looks like:

filter_predicate = And[
    Out[RT.Amount] | value | And[greater_than[5]][less_than[10]]
][
    Outs[RT.Category] | length | equals[1]
]

and this zefop can be applied as a simple predicate for a filter. Although the below is not exactly what happens, it is not far off from what a simple queryTransaction does:

g | now | all[ET.Transaction] | filter[filter_predicate] | collect
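To make the translation concrete without the Zef machinery, here is a plain-Python sketch of turning such a filter argument into a single predicate over records. This is illustrative only; the op names (`ge`, `le`, `eq`, `size`) mirror the GraphQL filter syntax, and the record shape is an assumption:

```python
# Comparison ops keyed by their GraphQL filter names.
OPS = {
    "ge": lambda field_val, arg: field_val >= arg,
    "le": lambda field_val, arg: field_val <= arg,
    "eq": lambda field_val, arg: field_val == arg,
}

def build_predicate(filter_spec):
    """Compile a nested filter dict into one predicate function."""
    def pred(record):
        for field, conditions in filter_spec.items():
            value = record[field]
            for op, arg in conditions.items():
                if op == "size":  # list-length conditions nest one level deeper
                    for inner_op, inner_arg in arg.items():
                        if not OPS[inner_op](len(value), inner_arg):
                            return False
                elif not OPS[op](value, arg):
                    return False
        return True
    return pred

pred = build_predicate({"amount": {"ge": 5, "le": 10},
                        "categories": {"size": {"eq": 1}}})
pred({"amount": 7, "categories": ["food"]})    # True
pred({"amount": 12, "categories": ["food"]})   # False
```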

🔒 Authentication 🔒

The Ikura database will store many different users' data, and we want to ensure no user can peek at another user's private information.

The auth setup, describing the key (symmetric or asymmetric) and which HTTP headers to use for verification, is given as a special comment in the .graphql file. The actual auth checks themselves are attached to each type via a special @auth directive, and are simple strings representing the Zef query. There are several auth possibilities, shown here as an example directive in the graphql file:

type SomeType @auth(
  query: "..."
  add: "..."
  update: "..."
  updatePost: "..."
  delete: "..."
) {
  ...
}

Don't worry about the overload of options though! By default, if an updatePost check is not explicitly given, the checks fall back to an update check if that is present, and then to a query check.

Note that this checking is performed not at the query level, but at the entity level. For example, it shouldn't matter how we arrive at an ET.User entity, whether from a getUser query or from a getTransaction query asking for the corresponding user: the appropriate rights should be checked before allowing the query to proceed.
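The entity-level idea can be pictured in plain Python. This is an illustrative sketch, not Zef code; the function names and the "a user may only see themselves" rule are assumptions made for the example:

```python
# Every resolution path that reaches a User funnels through the same gate,
# so it doesn't matter which query the entity was reached from.
def can_view_user(user, auth_context):
    return user.get("email") == auth_context.get("email")

def resolve_user(user, auth_context):
    # Called from getUser, from Transaction.user, etc. -- same check.
    return user if can_view_user(user, auth_context) else None

alice = {"email": "alice@example.com", "name": "Alice"}
resolve_user(alice, {"email": "alice@example.com"})  # returns alice
resolve_user(alice, {"email": "bob@example.com"})    # returns None
```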

Finally, we have implemented an experimental "rollback" feature in Zef that allows us to run auth tests on the graph data in situ. What I mean by that is we can perform, for example, a "pre" and a "post" auth check on an update mutation. The "pre" check can run a Zef query similar to:

z_transaction_before | Out[RT.User] | uid | equals[verified_user_uid]

which says "Only transactions that the connecting user owns can be modified", and then a follow-up "post" check:

z_transaction_after | Out[RT.User] | uid | equals[verified_user_uid]

effectively says "The transaction cannot be changed to point to a different user". Notice that these checks are exactly the same, so if we leave off the input and just write it as a ZefOp, we get:

Out[RT.User] | uid | equals[verified_user_uid]

In the actual auth check, we don't write verified_user_uid explicitly, but use the verified JWT found in the query HTTP headers, made available via info.context['auth'].
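For reference, HS256 verification of such a header token can be sketched with the Python standard library alone. This is an illustration of the mechanism, not the code SimpleGQL actually runs; the key and claims are made up for the demo:

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def verify_hs256(token: str, key: bytes) -> dict:
    """Return the JWT claims if the HS256 signature is valid, else raise."""
    header_b64, payload_b64, sig_b64 = token.split(".")
    signing_input = f"{header_b64}.{payload_b64}".encode()
    expected = hmac.new(key, signing_input, hashlib.sha256).digest()
    received = base64.urlsafe_b64decode(sig_b64 + "=" * (-len(sig_b64) % 4))
    if not hmac.compare_digest(expected, received):
        raise ValueError("invalid signature")
    return json.loads(base64.urlsafe_b64decode(payload_b64 + "=" * (-len(payload_b64) % 4)))

# Build a demo token with a shared secret and check that it round-trips.
key = b"shared-secret"
header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
payload = b64url(json.dumps({"email": "alice@example.com"}).encode())
sig = b64url(hmac.new(key, f"{header}.{payload}".encode(), hashlib.sha256).digest())
claims = verify_hs256(f"{header}.{payload}.{sig}", key)
print(claims["email"])  # alice@example.com
```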

Annotated graphql file

All of the above requires a few extra details to be added to the original ikura.graphql file. Here is what it looks like now:

# Zef.SchemaVersion: v1
# Zef.Authentication: {"Algo": "HS256", "VerificationKey": "...", "Audience": "", "Header": "X-Auth-Token"}

type User
  @auth(
    add: "info.context | get_in[('auth', 'admin')][False]"
    query: """
      (z | Out[RT.Email] | value
         | equals[info.context
                  | get_in[('auth', 'email')][None]
                  | collect])
    """
  )
  @hook(onCreate: "userCreate") {
  email: String! @unique @search
  name: String
  dob: DateTime
  transactions: [Transaction]
    @relation(rt: "TransactionUser")
}

type Transaction
  @auth(query: "auth_field('user', 'query')") {
  user: User @relation(rt: "TransactionUser")
  categories: [Category] @relation(rt: "TransactionCategory")
  amount: Int @search
  date: DateTime @search
}

type Category {
  transactions: [Transaction]
    @relation(rt: "TransactionCategory")
  name: String
  icon: String
  created: DateTime
}

You can also notice a few other directives, @unique, @search, @incoming, @relation... These are described in more detail in our documentation.

7 min read

This is the last blog of the Wordle blog series. Be sure to check out part 1 and part 2 before reading this one!

In this blog, we are going to adapt the code we wrote in part 1 to create the GraphQL backend for our game Worduel 🗡

We will see how easy it is to dynamically generate a GraphQL backend using ZefGQL, run it using ZefFX, and deploy it using ZefHub.

In this blog, we won't implement all of the endpoints needed for Worduel to run; the full code, including the schema and all endpoints, can be found in this Github repo.


Let's start building 🏗

To get started, we create an empty Zef graph:

g = Graph()

After that, we will use a ZefGQL tool which takes a string (containing a GraphQL schema) and a graph, parses the schema, and creates all the required RAEs (relations, atomic entities, and entities) on the graph.

Parsing GraphQL Schema

The schema used for this project can be found here.

schema_gql: str = "...."                # A string containing a compatible GraphQL schema
generate_graph_from_file(schema_gql, g) # Graph g will now contain a copy of the GraphQL schema
schema = gql_schema(g)                  # gql_schema returns the ZefRef to ET.GQL_Schema on graph g
types = gql_types_dict(schema)          # Dict of the GQL types connected to the GQL schema

Adding Data Model

After that we will add our data model/schema to the graph. We use delegates to create the schema. Delegates don't add any data but can be seen as the blueprint of the data that exists or will exist on the graph.

Psst: adding RAEs to our graph automatically creates delegates, but in this case we want to create a schema before adding any actual data.

[
    delegate_of((ET.User, RT.Name, AET.String)),
    delegate_of((ET.Duel, RT.Participant, ET.User)),
    delegate_of((ET.Duel, RT.Game, ET.Game)),
    delegate_of((ET.Game, RT.Creator, ET.User)),
    delegate_of((ET.Game, RT.Player, ET.User)),
    delegate_of((ET.Game, RT.Completed, AET.Bool)),
    delegate_of((ET.Game, RT.Solution, AET.String)),
    delegate_of((ET.Game, RT.Guess, AET.String)),
] | transact[g] | run  # Transact the list of delegates on the graph

If we look at the list of delegates closely we can understand the data model for our game.


Adding Resolvers

ZefGQL allows developers to resolve data by connecting a type/field in the schema to a resolver. You don't have to instantiate any objects or write heaps of code just to define your resolvers.

ZefGQL lifts all of this weight from your shoulders! It dynamically figures out how to resolve the connections between your GraphQL schema and your Data schema to answer questions.

ZefGQL resolvers come in 4 different kinds, with resolving priority in this order:

Default Resolvers

This is a list of strings containing the type names for which resolving should follow the default policy, i.e. mapping the keys of a dict to the fields of a type. We define default resolvers for types we know don't need any special traversal, apart from accessing a key in a dict or a property of an object using getattr.


default_list = ["CreateGameReturnType", "SubmitGuessReturnType", "Score"] | to_json | collect
(schema, RT.DefaultResolversList, default_list) | g | run
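In plain Python, the default policy amounts to something like the following. This is an illustration of the idea, not the code ZefGQL generates:

```python
# Sketch of the "default policy": a field resolves by dict lookup,
# falling back to attribute access for plain objects.
def default_resolve(obj, field_name):
    if isinstance(obj, dict):
        return obj.get(field_name)
    return getattr(obj, field_name, None)

class Score:
    def __init__(self, wins):
        self.wins = wins

default_resolve({"wins": 3}, "wins")  # 3
default_resolve(Score(5), "wins")     # 5
```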

Delegate Resolvers

A way of connecting a field of an ET.GQL_Type to the data delegate. Basically, it tells the runtime how to walk a specific relation by looking at the data schema.


duel_dict = {
    "games":   {"triple": (ET.Duel, RT.Game, ET.Game)},
    "players": {"triple": (ET.Duel, RT.Participant, ET.User)},
}
connect_delegate_resolvers(g, types['GQL_Duel'], duel_dict)

You can view this as telling ZefGQL that, for the subfield games of the Duel type, the given triple is how you should traverse the ZefRef you get at runtime.
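The idea can be pictured with a toy edge list. This is purely illustrative; Zef stores these as relations on the graph rather than as tuples:

```python
# Toy model of delegate resolution: the triple names which edges to follow.
edges = [
    ("duel1", "RT.Game", "game1"),
    ("duel1", "RT.Game", "game2"),
    ("duel1", "RT.Participant", "alice"),
]

def resolve_triple(node, relation):
    """Follow all outgoing edges of the given relation from a node."""
    return [tgt for src, rel, tgt in edges if src == node and rel == relation]

print(resolve_triple("duel1", "RT.Game"))  # ['game1', 'game2']
```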

Function Resolvers

We use function resolvers when resolving isn't as simple as walking the data schema. In our example, we want our mutation make_guess to run through special logic. Function resolvers are also useful when the field you are traversing isn't concrete but abstract, e.g. a field that returns an aggregate computed at query time.


def user_duels(z: VT.ZefRef, g: VT.Graph, **defaults):
    filter_days = 7
    # Keep only duels whose most recent game was created within the last 7 days
    return (z << L[RT.Participant]
            | filter[lambda d: time(d >> L[RT.Game] | last | instantiated)
                     > (now() - Time(f"{filter_days} days"))]
            | collect)

user_dict = {
    "duels": user_duels,
}
connect_zef_function_resolvers(g, types['GQL_User'], user_dict)

We attach the user's subfield duels to a function that traverses all of the user's duels but filters on the time of the last move, keeping only duels less than 7 days old. We could have used a delegate resolver, but then we wouldn't be able to add the special filtering logic.

Fallback Resolvers

Fallback resolvers are used as a last resort when resolving a field. They usually contain logic that applies to multiple fields that can be resolved the same way. In the example below, we see a code snippet for resolving any id field.


fallback_resolvers = ("""
def fallback_resolvers(ot, ft, bt, rt, fn):
    from zef import RT
    from zef.ops import now, value, collect
    if fn == "id" and now(ft) >> RT.Name | value | collect == "GQL_ID":
        return ('''
if type(z) == dict: return z["id"]
else: return str(z | to_ezefref | uid | collect)''')
    return "return None"
""")
(schema, RT.FallbackResolvers, fallback_resolvers) | g | run

The function should return a str, as this logic will be pasted into the generated resolvers.
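The "pasted as source" mechanism can be mimicked in a few lines. This is a sketch of why a string is returned, not the actual ZefGQL code generator:

```python
import textwrap

# The fallback returns resolver *source code*; the generator pastes it
# into a function body and compiles it.
body = '''
if type(z) == dict: return z["id"]
else: return str(z)
'''
source = "def generated_resolver(z):\n" + textwrap.indent(textwrap.dedent(body), "    ")
namespace = {}
exec(source, namespace)
print(namespace["generated_resolver"]({"id": "abc"}))  # abc
```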

The function signature might be a bit ugly and shows a lot of the implementation details. This part will definitely be improved as more cases come to light.

Running the Backend 🏃🏻‍♂️

The final API code will contain a mix of the above resolvers for all the types and fields in the schema. After defining all of the resolvers, we can test it locally using the ZefFX system.

{
    "type": FX.GraphQL.StartServer,
    "schema_root": gql_schema(g),
    "port": 5010,
    "open_browser": True,
} | run

This will execute the effect, which starts a web server that knows how to handle incoming GQL requests. It will also open the browser with a GQL playground so that we can test our API.

It is literally as simple as that!

Deploying to prod 🏭

To deploy your GraphQL backend, you have to sync your graph and tag it. This way you can run your API from a different process/server/environment, because the graph is synced to ZefHub:

g | sync[True] | run               # Sync your graph to ZefHub
g | tag["worduelapi/prod"] | run # Tag your graph

Now you are able to pull the graph from ZefHub by using the tag.

g = Graph("worduelapi/prod")

Putting it all together, the necessary code to run your GraphQL backend looks like this:

from zef import *
from zef.ops import *
from zef.gql import *
from time import sleep
import os

worduel_tag = os.getenv('TAG', "worduel/main3")
if __name__ == "__main__":
    g = Graph(worduel_tag)
    make_primary(g, True)  # To be able to perform mutations locally without needing to send merge requests
    {
        "type": FX.GraphQL.StartServer,
        "schema_root": gql_schema(g),
        "port": 5010,
        "bind_address": "",
    } | run

    while True: sleep(1)  # Keep the process alive while the server runs

As a side note: in the future, ZefHub will allow you to remotely deploy your backend from your local environment by running the effect on ZefHub, i.e. my_graphql_effect | run[on_zefhub]

Wrap up 🔚

Just like that, a dynamically-generated running GraphQL backend in no time!

This is the end of the Wordle/Worduel blog series. The code for this blog can be found here.