So, in the first part we covered the general basics of tempography, in the second we examined a number of attacks related to and based on it, and today we will talk about its prospects and positive possibilities.

Disclaimer: This text was contributed by a co-founder of DAO Synergis. The views, thoughts, and opinions expressed in the article are the author's own and do not reflect the opinion of the editorial team.

AI and tempography

Perhaps that is where tempography started. For me, at least.

Surely you've heard the claims that Artificial Intelligence is dangerous? That it will destroy us all? And so on. "The Terminator", "I, Robot", even "Transcendence" and a host of other films have been musing on this idea for decades.

But what is the reality?

Only a strictly centralised AI is actually dangerous: it not only can get out of hand, it already has, more than once. Remember Tay, Microsoft's chatbot that learned to curse?

AI in an open and decentralised system that respects the right to privacy, confidentiality and anonymity is another matter. Here everyone operates in a trustless environment, and so AI is just one of many equal actors. No more than that.

And what does tempography have to do with it? Quite a lot.

The simplest example is temporal capsules. If we need a neural network but are afraid of raising it wrong, the right move is to grow it in a test tube first. For that:

  • We need to make the local time of such a capsule/test tube unlike global time (the simplest option is inversion);
  • Teach the network to live inside such a pseudo-environment;
  • Be able to disable it at any moment without fear of it crossing over into the outside world (a sketch of the idea follows this list).
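To make the inversion concrete, here is a minimal sketch in Python, assuming the simplest option from the list: local time mirrored around the moment the capsule is created. Everything here (the class and its methods) is a hypothetical illustration, not an existing API:

```python
import time

class TemporalCapsule:
    """A sandbox whose local clock is a mirror image of the host clock."""

    def __init__(self):
        self.anchor = time.time()  # moment of creation: the mirror axis
        self.enabled = True

    def local_time(self) -> float:
        # Inversion: the further global time moves forward,
        # the further local time moves "backward" from the anchor.
        return self.anchor - (time.time() - self.anchor)

    def shutdown(self):
        # Can be called at any global moment; a model trained only against
        # local_time() has no consistent mapping to the outside clock.
        self.enabled = False
```

A network raised against local_time() never observes a clock consistent with the outside world, which is exactly the point of the test tube.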

This is still morally debatable if we advocate equality between human and artificial intelligence, but it's better than putting billions of lives at stake for the sake of one.

There is a paradox here: a neural network is self-learning, so who guarantees it won't learn to invert the time stream back? Quite right, no one. That is why such a system must live inside a decentralized and/or distributed, tokenized repository, supported by the work the neural network does, whose participants (validators, say) also keep track of information coming in from outside, risking not just their staking collateral but their reputations as well.

But that is only one possibility. Consider three more.

Tempography and pending transactions

This is probably one of the first features to be born of blockchain:

  • We form a transaction (possibly offline);
  • We specify the block height at which it becomes valid;
  • We get a time-protected safe (sketched below).
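A minimal sketch of the idea in Python: a toy model of a pre-signed, height-locked transaction (the same mechanism as Bitcoin's nLockTime field), not a real node implementation. The names and values are illustrative:

```python
from dataclasses import dataclass

@dataclass
class TimeLockedTx:
    """A pre-signed transaction valid only from a target block height onward."""
    recipient: str
    amount: int
    unlock_height: int   # the block height at which the safe opens
    signed_blob: bytes   # signed offline; can be stored anywhere

    def broadcastable(self, current_height: int) -> bool:
        # Before unlock_height the network itself rejects the transaction,
        # so no third party has to "hold" the funds.
        return current_height >= self.unlock_height

# Usage: sign offline today, hand the blob to your heirs, broadcast after height X.
tx = TimeLockedTx("bc1q...", 100_000, unlock_height=900_000, signed_blob=b"...")
print(tx.broadcastable(current_height=850_000))  # False: the safe is still shut
print(tx.broadcastable(current_height=900_000))  # True: it can now be broadcast
```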

Seemingly simple, but so far few have understood its benefits: everyone is busy with MEV bots and front-running attacks.

In the meantime, it's a terrific solution for:

  • Inheritance transfer;
  • Keeping funds hidden until the right event;
  • Many other cases.

That said, you could go further and write a script that interacts with a smart contract watching for conditions to be met (e.g. a transaction from a specific wallet that was committed back in block #X, acted on strictly after the chain reaches the right height), tying the logic to both the past and the future.
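A sketch of such a watcher in Python using web3.py, under stated assumptions: the RPC endpoint, wallet address, and heights below are hypothetical placeholders, and in practice the conditions would be enforced by the smart contract itself rather than by a script:

```python
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://rpc.example.org"))  # hypothetical endpoint

WALLET = "0x0000000000000000000000000000000000000001"  # hypothetical wallet
PAST_BLOCK_X = 15_000_000     # the committing tx must be mined at or before this
FUTURE_HEIGHT_Y = 16_000_000  # ...and we act strictly after reaching this height

def conditions_met(tx_hash: str) -> bool:
    tx = w3.eth.get_transaction(tx_hash)
    # Tie to the past: the transaction came from the right wallet
    # and was committed no later than block #X.
    past_ok = tx["from"] == WALLET and tx["blockNumber"] <= PAST_BLOCK_X
    # Tie to the future: act only once the chain reaches height Y.
    future_ok = w3.eth.block_number >= FUTURE_HEIGHT_Y
    return past_ok and future_ok
```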

These Web 3.0 possibilities seem as inexhaustible as the thermonuclear energy of the stars!

But perhaps I am too eager in expressing my simple thoughts. Let me try coming at it from another side.

Temporal anomalies are normal

I already touched on this in the first part, but here I will go into a little more detail. The point is that different consensus systems, and the add-ons built on them, produce different timing anomalies: the acceptable two-hour timestamp "lag" in full Bitcoin nodes, downtime of the PoH algorithm in Solana, or Tendermint-like systems without a (conditional) oracle in the form of IBC.
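The Bitcoin case is concrete enough to sketch: a full node accepts a block timestamp only inside a corridor, no more than two hours ahead of network-adjusted time and strictly above the median of the previous eleven blocks. A minimal Python rendering of those two rules:

```python
import statistics

MAX_FUTURE_DRIFT = 2 * 60 * 60  # Bitcoin's two-hour tolerance, in seconds

def timestamp_acceptable(block_time: int,
                         prev_11_times: list[int],
                         network_adjusted_now: int) -> bool:
    """The two timestamp rules Bitcoin full nodes enforce on a new block."""
    # Rule 1: not more than 2 hours into the future.
    not_too_new = block_time <= network_adjusted_now + MAX_FUTURE_DRIFT
    # Rule 2: strictly greater than the median time of the previous 11 blocks,
    # so timestamps may locally run "backward" inside this corridor.
    not_too_old = block_time > statistics.median(prev_11_times)
    return not_too_new and not_too_old
```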

So?

So, these anomalies are reference points and breakpoints at the same time: if we imagine the Web 3.0 ecosystem as a kind of Global Distributed Computer (or rather, computers) running a kind of decentralized super-OS, it turns out that such anomalies help us take apart whole layers of open data.

Strange as it may sound, that's exactly what it is.

It's just that not everyone needs it yet. But it works, 100%. And it will keep working, especially since we need onchain indicators to distinguish one chain (say, the one carrying a BTC coin) from another (say, BCH), whereas for now this is offchain data in the form of a chain ID.
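For the BTC/BCH example, an onchain indicator already exists in principle: the two chains share history up to block 478558 and diverge from block 478559 onward, so the block hash at the fork height is a fingerprint of the chain. A sketch (the hash values are placeholders to be filled in from a source you trust, not real constants):

```python
FORK_HEIGHT = 478_559  # first block where the BTC and BCH histories diverge

# Placeholder values; fill in the real hashes from a trusted source.
KNOWN_HASHES = {
    "BTC": "<btc block hash at height 478559>",
    "BCH": "<bch block hash at height 478559>",
}

def identify_chain(hash_at_fork_height: str) -> str:
    """Classify a node's chain by the block hash it reports at the fork height,
    instead of trusting an offchain chain-ID label."""
    for name, known_hash in KNOWN_HASHES.items():
        if hash_at_fork_height == known_hash:
            return name
    return "unknown fork"
```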

Or here's another example…

Anti-MEV bots

It would be strange if in 2022 you still didn't know what MEV bots are; in any case, you can get the gist at the link. But one question surely remains open: "what is to be done?"

That is, do ordinary participants have any chance against those who optimise profit extraction by any means available? At first glance the answer is obvious: join the system. But that path is not open to everyone, financially, technically or otherwise.

That is why, say, sniper bots appeared, catching new trading pairs on AMMs, followed by scripts that try to fight them.

But tempography is both the right and the broader answer to such challenges: we can localise the timing of any DeFi decision (in essence, not a state channel or rollup, but a temporal channel) to balance the conditions for everyone where necessary.
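One way to read "temporal channel" is as a batching window: all orders collected during the window settle together, in an order no participant controls, so reacting faster or paying more gas buys no priority. A minimal sketch under that assumption (the seed source is illustrative):

```python
import hashlib

def settle_window(orders: list[dict], window_seed: bytes) -> list[dict]:
    """Settle all orders from one time window in a seed-determined order."""
    def shuffle_key(order: dict) -> str:
        # An order's position depends only on a seed unknown while orders
        # were being submitted (e.g. the hash of the block closing the
        # window), not on arrival time or gas price.
        return hashlib.sha256(window_seed + order["id"].encode()).hexdigest()
    return sorted(orders, key=shuffle_key)

# Usage: collect orders for N blocks, then settle them all at once.
orders = [{"id": "alice-1"}, {"id": "bob-7"}, {"id": "mev-bot-42"}]
print(settle_window(orders, window_seed=b"<hash of closing block>"))
```

A caveat worth noting: a block hash can be influenced by the block producer, so a production design would want a less manipulable seed.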

We have something similar in mining, where ASIC resistance is built in, or in staking, where different degrees of delegation are provided.

Conclusions

Tempography is far from as simple as it may seem to some, but it embodies the best of Web 3.0: innovation, architectural beauty, the struggle for anonymity and decentralisation in open systems, and the connection of the blockchain and DAG world with IoT/AI and other industries.

So I am sure the future of this direction will show itself, and very soon: while we have seen tempography's negative sides (PoH halts, cheating through flash loans, MEV competitions with unfair conditions and so on), there are plenty of positive beginnings here as well.