For people curious about the Roam API and confused by the syntax, or interested in why Conor went with Datomic/Datascript and not a traditional database, this older talk by Roam developer @mark_bastian is a great overview.

Using Spider-Man as a running example, he shows how modeling even something fairly trivial is much more complex in SQL than in Datomic. But the real kicker comes when you try to interrogate the data for recursive relationships.
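
To make that concrete, here's a rough sketch in Datascript (the in-browser cousin of Datomic that Roam is built on). The schema and names are invented for illustration, but the shape is the point: the recursive "ancestor" rule below would require a recursive CTE in SQL.

```clojure
(require '[datascript.core :as d])

;; toy database: parenthood stored as plain (entity, attribute, value) triples
(def db
  (d/db-with (d/empty-db {:parent {:db/valueType :db.type/ref}})
             [{:db/id -1 :name "Richard Parker"}
              {:db/id -2 :name "Peter Parker" :parent -1}
              {:db/id -3 :name "Mayday Parker" :parent -2}]))

;; recursive rule: ?a is an ancestor of ?d directly, or through some ?x
(def rules
  '[[(ancestor ?a ?d) [?d :parent ?a]]
    [(ancestor ?a ?d) [?x :parent ?a] (ancestor ?x ?d)]])

;; all descendants of Richard Parker, at any depth
(d/q '[:find ?name
       :in $ %
       :where
       [?r :name "Richard Parker"]
       (ancestor ?r ?desc)
       [?desc :name ?name]]
     db rules)
;; => #{["Peter Parker"] ["Mayday Parker"]}
```
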
Right now the Roam data model (at least the part exposed to developers) is just pages, blocks, and children, plus tags. Already you can see how finding the page that contains a block with a certain tag, etc., is useful. https://t.co/jWJnuKu1RG
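
For instance, "find every page containing a block that references [[TODO]]" is a few lines of Datalog. This sketch uses Roam-style attribute names (:node/title, :block/refs, :block/page), with `db` standing for the graph's current database value:

```clojure
(d/q '[:find ?title
       :where
       [?tag   :node/title "TODO"]
       [?block :block/refs ?tag]
       [?block :block/page ?page]
       [?page  :node/title ?title]]
     db)
```
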
But imagine when attribute relationships are fully represented.

You should be able to model the entire Spider-Man story in Roam:
Page title: Peter Parker
Child of:: [[Richard Parker]] [[Mary Parker]]
Aliases:: [[Spidey]]

etc., and run these kinds of queries:
"Show me quotes about operational efficiency in books by authors who used to be in the military"
"Show me companies in Boise, Idaho, founded by women, whose evaluation is lower than 10X ARR"
Of course, this data can also be used as input to timelines, graphs, and so on:

"Show me a graph of my sleep quality versus days in which I ate foods that had gluten in them or not" (where [[bread]] has a page with ingredients::).
One thing I'm curious about is how the Datalog system differs from Wikidata and SPARQL. The modeling seems similar: you have triples of entities, like :Oslo :is-capital-of :Norway (where all three elements have an ID), and you can run graph queries.
So you can ask for the largest city with a female mayor, but you can also visualize the data in all kinds of ways: the dimensions of elements, the children of Genghis Khan, or the lighthouses of Norway. https://t.co/XvxmEVO9vB
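
The two models do look close. A Wikidata-style triple is a single datom in Datascript, and the graph query reads almost like its SPARQL counterpart (toy data, invented attribute names again):

```clojure
(def geo-db
  (d/db-with (d/empty-db {:capital-of {:db/valueType :db.type/ref}})
             [{:db/id -1 :name "Norway"}
              {:db/id -2 :name "Oslo" :capital-of -1}]))

;; "which city is the capital of Norway?"
(d/q '[:find ?city .
       :where
       [?c :capital-of ?n]
       [?n :name "Norway"]
       [?c :name ?city]]
     geo-db)
;; => "Oslo"
```
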
What would it look like to integrate Wikidata with Roam in the future, being able to easily pull in and reference data about entities (cities, authors, scientific concepts)... And build our own Wikidata through inter-Roaming... As well as citations (https://t.co/aP7RSyaGl0) ...

More from Tech

THREAD: How is it possible to train a well-performing, advanced Computer Vision model ON THE CPU? 🤔

At the heart of this lies the most important technique in modern deep learning: transfer learning.

Let's analyze how it works.


2/ For starters, let's look at what a neural network (NN for short) does.

An NN is like a stack of pancakes, with computation flowing up when we make predictions.
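
In code, the pancake stack is just function composition. A minimal sketch (mine, not the thread author's), assuming each layer is a function from a vector of numbers to a vector of numbers:

```clojure
;; a network applies its layers bottom-to-top:
;; each layer's output becomes the next layer's input
(defn network [layers]
  (fn [inputs]
    (reduce (fn [x layer-fn] (layer-fn x)) inputs layers)))
```
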

How does it all work?


3/ We show an image to our model.

An image is a collection of pixels. Each pixel is just a bunch of numbers describing its color.

Here is what it might look like for a black-and-white image:
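
For example, a toy 2x2 grayscale image is just a grid of brightness values (made up here; 0.0 is black, 1.0 is white):

```clojure
;; a tiny 2x2 black-and-white "image": one number per pixel
(def image [[0.0 1.0]
            [1.0 0.0]])

;; flattened into the list of numbers the bottom layer will see
(def pixels (vec (apply concat image)))  ; => [0.0 1.0 1.0 0.0]
```
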


4/ The picture goes into the layer at the bottom.

Each layer performs computation on the image, transforming it and passing it upwards.
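
Continuing the sketch: one dense layer computes, for each neuron, a weighted sum of its inputs plus a bias, passed through a nonlinearity (ReLU here). Real computer-vision models use convolutional layers, but the shape of the computation is the same:

```clojure
(defn relu [x] (max 0.0 x))

;; one dense layer: each output is relu(weights . inputs + bias)
(defn layer [weights biases]
  (fn [inputs]
    (mapv (fn [w b] (relu (+ b (reduce + (map * w inputs)))))
          weights biases)))
```
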


5/ By the time the image reaches the uppermost layer, it has been transformed to the point that it now consists of two numbers only.

The outputs of a layer are called activations, and the outputs of the last layer have a special meaning... they are the predictions!
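
Putting the sketch together with made-up weights: a two-layer network whose top layer has exactly two outputs, squashed by softmax into two probabilities, i.e. the predictions:

```clojure
(defn softmax [xs]
  (let [es    (map #(Math/exp %) xs)
        total (reduce + es)]
    (mapv #(/ % total) es)))

;; toy network: 4 pixels in, 2 numbers out (say, "cat" vs. "dog")
(def toy-net
  (network [(layer [[0.5 -0.2  0.1 0.3]
                    [0.1  0.4 -0.3 0.2]] [0.0 0.0])
            (layer [[ 1.0 -1.0]
                    [-1.0  1.0]] [0.0 0.0])]))

(softmax (toy-net pixels))
;; => [0.475 0.525] (approximately): two numbers that sum to 1
```
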
