
Dialogue with Redstone developers: Full-chain gaming and the revival of Plasma

Source: FunBlocks

In this special Devs on Devs, we invited tdot[2], core protocol developer of Plasma Mode[1] and a developer of Redstone[3], and Ben Jones[5], co-founder of Optimism[4]. Optimism is the core driver of the OP Stack. Plasma Mode lets developers build on the OP Stack without publishing data to L1, flexibly switching to off-chain data providers to cut costs and improve scalability. In this conversation, they discussed the origins of the Redstone and Optimism collaboration, the importance of reviving Plasma, the need to bring experimental protocols into production, the future roadmaps of Plasma Mode and the OP Stack, and their excitement about the full-chain gaming space.

01. How to use Plasma Mode to improve the OP Stack

Ben: What was the process like to start improving the OP Stack?

tdot: I joined Lattice about a year ago to work on Plasma Mode. The goal was very clear: we had a lot of MUD[6] applications that consumed a lot of gas, and we were trying to put a lot of data on-chain, so we needed a solution that supported both needs cheaply. The Lattice team had already run some experiments with the OP Stack, prototyping on-chain worlds and deploying them on it, and we found the OP Stack was already very good.

So we asked ourselves, "How can we make it cheaper?" The basic assumption was, "We think OP Stack is the most consistent framework with Ethereum's philosophy and fully compatible with the EVM." The ideal solution is that things that run on mainnet can also run on OP Stack. But we want it to be cheaper.

At the time, calldata was still the data availability (DA) source for OP Stack chains, and it was very expensive. We obviously couldn't launch an L2 on calldata, because our full-chain games and MUD worlds required higher throughput. So we decided to start experimenting with alternative data availability (Alt DA) solutions; in fact, exploring Alt DA was already mentioned in the original OP Stack documentation.

So we asked ourselves, "What if we started with off-chain DA?" We wanted the entire security model and everything to rely on L1 Ethereum. So we avoided other Alt DA solutions and decided to store data in centralized DA storage and then find an effective security model on L1.

This is why we reused some old Plasma concepts and put them on top of a rollup, with some differences. The big question was: how do we add off-chain DA and on-chain data challenges to the existing OP Stack? Our goal was to change the OP Stack as little as possible, with no impact on the rollup path, because we didn't want to affect the security of the other rollup chains that use the OP Stack.

When you design a rollup, you don't ask, "What happens if someone changes the data pipeline to store data somewhere else?" Yet even with these changes, the OP Stack still works very well out of the box. Changing where data is stored was the first change we made.

After that, we needed to write contracts to create these challenges: DA challenges that are used to force data on-chain. That was the second step, integrating the contracts into the pipeline. We had to wire this whole system into the derivation process so that data can be derived both from an off-chain DA source and from the L1 DA challenge contract, in case the data is submitted on-chain while a challenge is being resolved.
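To make that fallback concrete, here is a minimal sketch of the derivation-side lookup tdot describes: try the off-chain DA provider first, then fall back to data a resolved challenge forced on-chain. All names here are invented for this example, and sha256 stands in for the keccak256 commitment the real stack uses; this is not the actual OP Stack API.

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"errors"
	"fmt"
)

// memStore is a toy stand-in for an off-chain DA provider (S3, IPFS, ...)
// or for the data a resolved L1 challenge forces on-chain.
type memStore map[string][]byte

func (m memStore) Get(commitment []byte) ([]byte, error) {
	if d, ok := m[string(commitment)]; ok {
		return d, nil
	}
	return nil, errors.New("not found")
}

// commit hashes the input data. Plasma Mode commits with keccak256 on-chain;
// sha256 stands in here to keep the sketch dependency-free.
func commit(data []byte) []byte {
	h := sha256.Sum256(data)
	return h[:]
}

// fetchInput resolves a commitment read from L1: try the cheap off-chain
// provider first, then fall back to data submitted during challenge resolution.
func fetchInput(offchain, resolved memStore, commitment []byte) ([]byte, error) {
	if data, err := offchain.Get(commitment); err == nil && bytes.Equal(commit(data), commitment) {
		return data, nil
	}
	data, err := resolved.Get(commitment)
	if err != nil || !bytes.Equal(commit(data), commitment) {
		return nil, errors.New("input unavailable: a DA challenge is required")
	}
	return data, nil
}

func main() {
	batch := []byte("compressed L2 batch")
	c := commit(batch)
	offchain := memStore{}                 // the provider lost the data
	resolved := memStore{string(c): batch} // but a challenge forced it on-chain
	data, err := fetchInput(offchain, resolved, c)
	fmt.Println(string(data), err)
}
```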

That's the gist of it. It's complicated because we want to keep things elegant and robust. At the same time, it's a relatively simple concept. Rather than trying to reinvent everything or change the entire OP Stack, we tried to keep things simple in a complex environment. So overall it's been a pretty cool engineering journey.

Ben: I can talk about this from the OP perspective. You mentioned some of the early work on Lattice. It just so happened that at the same time, we at Optimism were doing an end-to-end rewrite of almost the entire OP Stack, which we called Bedrock.

Basically, after two years of building rollups, we took a step back and said, "Okay, if we were to take all of the lessons we'd learned and take them to the extreme, what would that look like?" That evolved into the codebase that ultimately became known as Bedrock, which is the biggest upgrade we've ever made to the network.

At the time, we were working with you on a project called OPCraft[7] (I think Biomes[8] is its spiritual successor), and that was the most fun we ever had on-chain. At the same time, it was a relief that other people could also build with the OP Stack. I think another big turning point in scaling over the last few years is that a lot of people can now run chains.

Karl from Optimism observes OPCraft gameplay

It's not just the people who developed a big, complex codebase who can do this. When we started working together, it was really affirming to see other people take that codebase and do something amazing with it. And then seeing that extend to Plasma in real-world applications was really cool. I can even share a little of that history.

Before Optimism became Optimism, we were actually working on a technology called Plasma. We took on a task far beyond the capacity of the scaling community at the time. The early Plasma designs don't map directly onto the Plasma of today.

Today's Plasma is much simpler. We separated the proofs and challenges of state validity from the challenges of data availability. Ultimately, a few years ago, we realized that rollups were much simpler than Plasma. The community's conclusion at the time was "Plasma is dead," which became a meme in the history of Ethereum scaling.

But we always said, "Plasma isn't dead; we're just trying a simpler task first." Now we use different terminology. For example, there was the concept of exits back then, and looking back you can say, "Oh, that was a data availability challenge with some extra steps." So it's been amazing to see the OP Stack not only used by other people, but also evolving into what we originally attempted in a very messy, immature, abstract way. We've come full circle: you built great abstractions around those ideas and made them work in a reasonable, sane way. It's really cool.

Coindesk coverage from when Plasma became Optimism

02. The most important thing is to get into production as soon as possible

tdot: There are still some challenges and unsolved problems in Plasma Mode, and we're still working on them. The key is how to avoid it taking ten years, you know what I mean? We need to get to a stage where we can deliver results as soon as possible.

That's our thinking. We already have a lot of applications built on MUD that want to launch immediately, so we need a mainnet ready for these games as soon as possible. People are waiting and ready. You need a chain up and running quickly so these applications can develop in parallel and improve while we solve the remaining problems. Going from R&D to production stability takes a long time.

It takes a lot of time to get something to mainnet and make it permissionless, robust, and secure. It's been amazing to watch the whole process of getting there. That's why we need to be very agile: there's so much going on, the whole ecosystem is moving very fast, and everybody is delivering a lot of innovation. You have to keep up, but you also can't compromise on security and performance, or the system won't work.

Ben: Or technical debt. You mentioned the principle of minimal change, which was one of our core philosophies during the Bedrock rewrite. I talked about it as an end-to-end rewrite, but more importantly, we cut roughly 50,000 lines of code, which is powerful in itself. Because you're right, these things are really hard.

Every additional line of code takes you further away from production, makes things harder to be battle-tested, and introduces more opportunities for bugs. So we really appreciate all of your efforts in driving this, and especially contributing to the new operating model for the OP Stack.

tdot: The OP Stack has really created a way to move very quickly on these kinds of things. It's very hard to coordinate everyone because we're obviously two different companies. At Lattice, we're building a game, a game engine, and a chain.

And you're building hundreds and hundreds of things and shipping all of them on a regular basis. It's really hard from a coordination perspective.

Ben:Yeah, there's a long way to go. But that's the core beauty of modularity. To me, this is one of the most exciting things from an OP Stack perspective, not to mention the amazing games and virtual worlds that are being built on Redstone right now. It's just a really strong example from a pure OP Stack perspective of how many great core developers have come in and improved the stack, which is amazing.

This is the first time you can significantly change the properties of the system with a single boolean. Doing this thoroughly is, as you said, still a long way off. But even getting close to doing it effectively requires modularity, right? For us, it was a relief to see you achieve this without, for example, rewriting L2 Geth. To me, that's proof that modularity is working.

Now it's even better. From this example, you've turned everything into independent little modules that can be tweaked and changed. So I'm really looking forward to seeing what other new features get integrated.

tdot: I remember we were worried because we had a fork with all the changes to the OP Stack that needed to be merged into the trunk. We thought, "Oh my god, reviewing everything is going to be crazy."

We had to break it down into smaller pieces, but the whole process went very smoothly. We have a very good rapport with the team, so the review process was pleasant too. It felt very natural, and reviewing and resolving potential issues went very quickly. Everything went surprisingly smoothly.

Ben: That's great. One of our focuses this year is creating contribution paths for the OP Stack, so I really appreciate you participating in testing and pushing on those processes. I'm glad they haven't been overwhelming and that we've gotten results. Speaking of which, I'm curious: from your perspective, where does this work go next? What are you most looking forward to building?

tdot: There are a lot of different workstreams. The main one is integration with the fault proof mechanism. We're taking an incremental approach to decentralizing the whole stack and making it more permissionless, with the ultimate goal of features like permissionless challenges and forced exits.

We have that end goal and are getting there gradually while maintaining security. One challenge is that it's sometimes easier not to be on mainnet, because then you don't need hard forks. You might think, "Oh, I'll just wait until everything is completely ready; then I won't have to hard fork and there's no technical burden." But if you want to reach mainnet quickly, you have to deal with these complex upgrades and ship them frequently. Doing that while staying highly available is always a challenge.

I think there will be a lot of upgrades to Plasma Mode once the fault proof mechanism and all these parts are ready. There's also still room for optimization in how we batch commitments. Right now we do it very simply: one commitment per batcher transaction, where the commitment is just a hash of the input data stored off-chain.

We're keeping it as simple as possible for now so that it's easy and fast to review and doesn't diverge much from the OP Stack. But there are optimizations that could make it cheaper, such as batching commitments, submitting them in blobs, or other approaches. We're definitely looking into this to reduce the L1 cost.
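As a rough illustration of why batching helps (all numbers here are assumptions, and this is not the actual batcher encoding): every L1 transaction pays a fixed base cost, so carrying one 32-byte commitment per transaction spends far more gas on overhead than on data.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// commitment is the 32-byte hash of an input batch stored off-chain.
func commitment(input []byte) [32]byte { return sha256.Sum256(input) }

// pack concatenates several commitments into one calldata payload so that a
// single L1 transaction can carry all of them.
func pack(cs [][32]byte) []byte {
	out := make([]byte, 0, 32*len(cs))
	for _, c := range cs {
		out = append(out, c[:]...)
	}
	return out
}

func main() {
	inputs := [][]byte{[]byte("batch-1"), []byte("batch-2"), []byte("batch-3")}
	var cs [][32]byte
	for _, in := range inputs {
		cs = append(cs, commitment(in))
	}
	payload := pack(cs)

	// Rough L1 cost comparison, assuming the 21000-gas per-transaction base
	// cost and 16 gas per non-zero calldata byte (EIP-2028).
	separate := len(cs) * (21000 + 32*16)
	batched := 21000 + len(payload)*16
	fmt.Printf("separate txs: %d gas, one batched tx: %d gas\n", separate, batched)
}
```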

This is something we're very excited about. And of course, we're also very excited about all the upcoming interoperability work and being able to communicate between all the chains. Figuring that out will be a huge step forward for users.

A lot of that work will be up to you to implement, but we want to figure out what it looks like in Plasma Mode, with its different security assumptions.

Ben: Speaking of that, this will be another test of the OP Stack's modularity. You mentioned fault proofs; we're really looking forward to getting those live in Plasma Mode. It's also a big feature for rollups that's going live on mainnet in the next few months.

One of the exciting things about how we built this codebase is that, with some caveats, it's relatively easy to just hit recompile and run fault proofs in a new environment. So it's very exciting to see this happen in practice, because it will be another example of "it just works." As one of the first teams to make large-scale changes, I'm sure it won't be completely frictionless, but being able to try and ship fault proofs on a significantly changed codebase is a very exciting advantage for the community.

tdot: It's designed really well: you can plug in your inputs just like an "oracle" and swap those data sources as part of the fault proof flow. That shouldn't be too hard. Obviously you need to make sure it works across the whole end-to-end flow, but I don't think shipping it will be too hard either. That's probably on the roadmap going forward.
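A minimal sketch of that pluggability, under the assumption that the fault-proof program reads its inputs through hash-to-preimage lookups; the interface and names below are invented for illustration and are not the real OP Stack preimage-oracle API.

```go
package main

import (
	"crypto/sha256"
	"errors"
	"fmt"
)

// PreimageOracle is the shape of the dependency tdot describes: the
// fault-proof program reads every input through hash -> preimage lookups,
// so supporting Plasma Mode is mostly a matter of supplying a different
// implementation behind the same interface.
type PreimageOracle interface {
	Get(key [32]byte) ([]byte, error)
}

// plasmaOracle serves preimages from an off-chain DA store (here, a map).
type plasmaOracle struct {
	store map[[32]byte][]byte
}

func (o plasmaOracle) Get(key [32]byte) ([]byte, error) {
	if d, ok := o.store[key]; ok {
		return d, nil
	}
	return nil, errors.New("preimage missing: a DA challenge would be raised")
}

// runProgram stands in for the fault-proof program: it only ever sees data
// through the oracle, never a hard-coded source.
func runProgram(o PreimageOracle, key [32]byte) error {
	data, err := o.Get(key)
	if err != nil {
		return err
	}
	fmt.Printf("derived %d bytes of input through the oracle\n", len(data))
	return nil
}

func main() {
	batch := []byte("l2 batch data")
	key := sha256.Sum256(batch)
	oracle := plasmaOracle{store: map[[32]byte][]byte{key: batch}}
	if err := runProgram(oracle, key); err != nil {
		fmt.Println(err)
	}
}
```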

Overall, we're very interested in making a lot of performance improvements and optimizations. There's no silver bullet; every little problem needs to be solved piece by piece. If the entire community works on these issues, like an army of developers, we can gradually get to a high-performance chain built on amazing stability.

The MUD logo

03. MUD, Redstone, and Collaboration with Optimism

Ben: I’m really looking forward to seeing your progress on MUD integration with the OP Stack. I think there’s a lot of really cool potential there. One of the most exciting things we’ll be working on over the next year or two is continuing to push forward many of the big performance and throughput improvements that have been discussed for L1 Ethereum.

There’s a lot of work being done on this in the Ethereum research community, but it’s also high-stakes territory, and some of these big changes need a testbed. One example that comes to mind is state expiration. Your work is amazing precisely because it pushes the limits of how much content can fit on-chain, and I think one consequence we’re going to see is a real manifestation of the “state growth” problem: the more games are played, the more content nodes need to track, and the harder it becomes to execute transactions.

The Ethereum community has been working on this problem and proposing solutions for years. These solutions are tricky because they fundamentally change the structure of state management: basically, at some point you need to provide proofs so that state can be discarded unless someone wants to recover it.

I'm really excited because I think MUD is the perfect environment to actually implement these changes and make them work. You've already done a really great job on state management, and there's already a framework and a model people follow. And because you're focused on a framework for how applications are built on Redstone, you can experiment within it and try out these really tricky improvements that bring huge performance gains but require a new paradigm. I think you have the potential to break through here, so I'm really excited about it.

tdot: That's a great point. I like the idea of MUD abstracting away all kinds of low-level functionality for developers. Basically, the OP Stack is the base layer, where you deal with protocol primitives and the like, and developing with MUD simplifies those processes. As we move into a world of multi-chain interoperability, we're thinking about how to abstract across multiple chains. That's definitely an important question when we think about MUD and Redstone together.

So we also need to figure out what the ideal developer experience should look like. When you're dealing with all these chains, the details get hard to sort out, and users get tired of constantly switching between them. With a lot of L2s, it just ends up confusing people. I saw someone say recently, "I can't remember which chain my money is on." Keeping track of your balance on every chain is very complicated. We definitely need some abstraction to simplify this; otherwise it's going to stay very complicated. MUD is definitely a great opportunity to solve this.

Ben: Looking forward to your help. It's a lot of work, but it's super cool.

tdot: Working with you is definitely a huge help for us, because we're a very small team of only about 15 people, and it's obviously really hard to handle all of this alone. When you develop and collaborate on the Superchain, suddenly you have all the engineering resources you could possibly need. I'm basically the only engineer at Lattice working on Plasma Mode, but working with Optimism and leveraging all the other core developers greatly increases our productivity, so we can accomplish things that would normally be difficult to do independently. This flywheel effect is really awesome.

When I experienced this, it felt really powerful. I thought, “Wow, I can’t believe we just accomplished this.” It made me feel like anything is possible.

Ben: That really warms my heart. Thank you.

Is there a philosophical basis for Plasma's security design and how Layer 2 works? When something amazing happens and the community debates security models, it's often a sign that the boundaries of the technology have been pushed. When there's something subtle worth discussing and educating people about, it usually means there's exciting progress.

I feel like we haven't really explored Plasma's design as a Layer 2 security model in depth. I'm curious about your thoughts on this. I have some thoughts about the early Plasma era and would love to hear yours as well.

04. Defining Plasma Mode

tdot: Let me explain what Plasma Mode is and what it specifically means. It's a new OP Stack feature we developed as core contributors. It's currently experimental and includes one aspect of Plasma: off-chain data availability.

We call it Plasma because it carries forward the idea of storing input data off-chain. Instead of using L1 for DA, you store the data on any storage service, such as AWS or IPFS. Then someone needs to monitor whether that data is available; at least one person is required to check that the submitted data is actually retrievable.

If the data becomes unavailable for some reason, the protocol allows users to force an exit within seven days. Some pieces are still under development, such as fault proofs and permissionless commitment submission, which are coming soon. Users can run Sentinel[9] to automatically verify data availability, and if data becomes unavailable, you have to challenge it on L1.

Challenging unavailable data basically forces the data on-chain, or else the inputs are reorged out so that you can withdraw your funds and exit the chain. At this stage, these components are not fully deployed, so we want to emphasize that there is still some distance to full permissionlessness and accessibility, but we are getting there step by step.
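A minimal sketch of the challenge lifecycle just described. The state names and the 24-hour window below are illustrative stand-ins; the real challenge and resolve windows are protocol parameters in the Plasma Mode spec.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

type status int

const (
	Active   status = iota // challenge opened; the DA provider must respond
	Resolved               // provider published the preimage on L1 in time
	Expired                // window passed with no data: the input is dropped
	// from derivation, and users can proceed to a forced exit.
)

// challenge tracks one disputed commitment.
type challenge struct {
	commitment []byte
	opened     time.Time
	window     time.Duration
	state      status
}

func open(commitment []byte, now time.Time, window time.Duration) *challenge {
	return &challenge{commitment: commitment, opened: now, window: window, state: Active}
}

// resolve succeeds only while the window is still open.
func (c *challenge) resolve(now time.Time) error {
	if c.state != Active || now.After(c.opened.Add(c.window)) {
		return errors.New("challenge is not active")
	}
	c.state = Resolved
	return nil
}

// expire marks an unanswered challenge once the window has passed.
func (c *challenge) expire(now time.Time) {
	if c.state == Active && now.After(c.opened.Add(c.window)) {
		c.state = Expired
	}
}

func main() {
	t0 := time.Now()
	ch := open([]byte{0xab}, t0, 24*time.Hour) // illustrative window only
	ch.expire(t0.Add(25 * time.Hour))          // nobody resolved in time
	fmt.Println("state:", ch.state)            // 2 (Expired): data treated as absent
}
```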

There are also assumptions about what it costs users to challenge data in order to withdraw funds. These are still being defined, and we are optimizing them to make challenges ultimately cheaper and more accessible. We're working on a roadmap for this, separate from the OP Stack roadmap plans for fault proofs and decentralized sequencers.

One problem with this protocol is the Fisherman's Dilemma[10]: you need an honest "fisherman" to be online at all times, because if no one is watching, you won't know when data becomes unavailable, you may miss the withdrawal window, and the chain could be attacked by its operator.

The Fisherman's Dilemma

You can solve this in a number of ways: incentivize people to stay online through rewards, build a strong community, and make sure stakeholders with a large investment in the chain, such as bridge operators or liquidity providers, stay online and keep the chain and its operator honest. These users should stay online and challenge when problems arise. It's a very interesting topic because there are many ways to address the dilemma, and a lot of work remains to make this system accessible to anyone and to keep users attentive to the chain's upkeep.

Ben: What is a Layer 2? It's a blockchain that uses Layer 1 more efficiently. The classic analogy is, "You don't go to court to cash a check; you go to court when a check bounces." That's really the fundamental design idea behind these optimistic systems, and that's how we think about rollups: using the blockchain more efficiently. By only using L1 when there's a dispute, you increase the total throughput of the blockchain. I think it's also a great analogy for Plasma Mode, which basically extends the rollup idea from resolving withdrawal disputes to also enforcing the availability of the transaction data itself.

I think this is going to be a very powerful tool, because it lets you use Layer 1 even more efficiently and process much more data in a Layer 2 system at far lower cost than with a rollup alone. So that's very exciting. More importantly, it enables improvements over the status quo that aren't possible without Plasma Mode.

Of course it's not perfect. There's the Fisherman's Dilemma, which imposes some fundamental requirements on the whole system. But fundamentally, what's most exciting about Plasma compared to other Alt DA systems is that it turns a safety tradeoff into a liveness tradeoff.

In other systems, you can't go to court over data issues; the data is assumed to exist by default. That means if the data doesn't exist, you don't have the evidence to prove the state in court, and you're stuck. Plasma Mode, by contrast, makes a good tradeoff: it adds a new form of challenge that avoids data loss by having the community pay to publish the data to L1.

During dispute resolution you may not know the state of the chain, but a liveness tradeoff is much better than a safety tradeoff: a liveness tradeoff means you might not know the chain's state for a period of time, whereas a safety tradeoff means you can't tell whether a withdrawal from the chain is valid, potentially allowing someone to make an invalid withdrawal. That's how I think about Plasma Mode.

It extends the idea of "don't go to court to cash a check; go to court when the check bounces" to improve the tradeoffs of using Alt DA. Instead of situations where funds can be lost, you only have situations where the chain's state is temporarily unknown and users have to pay for data publication. I think that's a very exciting tradeoff.

tdot: We did take a risk in adopting the word "Plasma," because it carries a lot of historical baggage. The problem is definition: when we announced Plasma Mode and deployed it to mainnet, many people may have assumed it was essentially what Vitalik et al. described.

In reality, this is still the OP Stack. In bringing these Plasma-like features to it, we did not reinvent the OP Stack: we kept its safety assumptions and added off-chain data availability (DA) on top. What we borrowed from Plasma is that users can challenge data and verify its availability, and if the data is not available, force the DA provider to submit it on-chain, or reorg the inputs out so users can exit. Our safety assumption is that no matter what happens, users can force an exit and withdraw funds, even if the chain operator or the DA provider is malicious.

Ensuring that holds takes many steps. The idea is to secure the most basic parts first, build gradually on the guarantees of the OP Stack, and introduce decentralization and these guarantees step by step. We already have frameworks for evaluating the security of rollups, built by L2Beat and others, which are very useful to the community.

But Plasma itself doesn't fully fit that model. If you try to force Plasma Mode into the rollup framework, it doesn't match at every stage.

We still need to implement some of these features, so what needs to be made clear is the specific roadmap and how we get there. These things are still being defined and refined, and it's very valuable for everyone to discuss them together, work out what they mean, and converge on definitions.

Ben: Yeah, I totally agree. Progress on things like fault proofs is indeed important, but you're right that Plasma's security model needs a new framework, distinct from the one used for rollups. And if you're bullish enough on scaling Ethereum, there's really no other choice: you need an alternative data availability (DA) solution.

The reality is that rollups have obvious tradeoffs. They're easier to build, which is why the scaling community built them first. But if you want a truly horizontally scalable blockchain system, you can't be limited by your L1's data throughput, and with rollups alone, you will be. Once you understand that the goal is to push blockchains to global scale, you need an alternative data availability solution.

I mentioned earlier that Plasma is the best Layer 2 we can build on Alt DA, but it also has tradeoffs, and we need to communicate them clearly. With other Alt DA systems, the message is: if the data availability provider goes down, funds are lost. With Plasma, what we actually need to communicate is: "If this data availability layer goes down, users need to pay L1 publication fees, and those fees are not recovered."

To describe Plasma's security model, you would say, "This is the DA provider; this provider can go down, and it costs X dollars a day for the community to keep the system secure." Then you might multiply that cost by the length of the exit window and say, "With a malicious DA provider, the net cost is X dollars, which is basically the cost of running these challenges until people can get their funds out." It's a very subtle question that is bound to generate a lot of discussion about tradeoffs. You can obviously use more complex DA sources, which raises the cost of attacks and reduces the likelihood of fees being burned, but it also raises the cost of the system. So ultimately, as stewards of this technology, we need to lay out these tradeoffs very clearly.

I think you're right that DA providers will naturally have an incentive not to put us in a fisherman's dilemma, where users can't get their funds out and other people burn money on challenges. This is probably one of my favorite Layer 2 scaling debates. It was one of the original debates, before we realized, "We may need to do this eventually, but for now we can get around it by publishing data to L1." So it's great to see this topic back in the public eye.
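Ben's back-of-the-envelope framing is easy to make concrete. The sketch below runs the multiplication he describes; every number in it is an assumption chosen purely for illustration, not a measured value.

```go
package main

import "fmt"

func main() {
	// All figures below are illustrative assumptions.
	challengesPerDay := 50.0    // challenges needed to keep forcing data on-chain
	gasPerChallenge := 100000.0 // L1 gas to open and resolve one challenge
	gasPriceGwei := 20.0
	ethUSD := 3000.0
	exitWindowDays := 7.0

	// gas * gwei -> ETH (1 gwei = 1e-9 ETH) -> USD
	costPerDay := challengesPerDay * gasPerChallenge * gasPriceGwei * 1e-9 * ethUSD
	netCost := costPerDay * exitWindowDays
	fmt.Printf("community cost: $%.0f/day, about $%.0f over a %.0f-day exit window\n",
		costPerDay, netCost, exitWindowDays)
}
```

With these assumed inputs, the community pays about $300 per day, or roughly $2,100 over a seven-day exit window: the "net cost" of surviving a malicious DA provider in Ben's framing.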

I think over the next year we’ll see a big increase in community understanding of this issue.

Coming full-circle, from OPCraft to Biomes on Redstone

05. Chain Standardization and Moving Towards the Future

tdot: We need better tools to validate chains: guaranteeing data availability, ensuring the correctness of outputs, and running comprehensive checks so that at least one person is validating by default.

The more people validate, the more valuable the chain becomes. So if we can make running these validators cheaper and easier, we can pool community resources and make sure there's always someone to challenge and validate. That's one of the important steps toward better security and decentralization.

It's really great that we can work on these things together. The protocol gets more attention, more ideas, more discussion, and more testing. I think Plasma Mode will be run by more people, and more people will discover and experience it. Running your protocol knowing that others are running it too adds scrutiny and real-world testing, and eventually we'll arrive at some very solid solutions. I'm very excited about that. If we were developing this protocol alone in our corner, the experience would be completely different.

Ben: That's why this approach is so good at helping us figure out the problem. We realized that standardization is critical to the OP Stack. We need a unified, easy-to-understand way for people to run these chains while preserving the security properties they claim to have. One challenge is that an external team may make changes that seem innocuous but actually have a huge impact on the security, performance, or overall behavior of the system. From our perspective, standardization is a powerful tool: through community discussion, we not only gather a wide range of opinions but also form a set of standards that let everyone operate and communicate responsibly.

The security models L2Beat provides are an invaluable public resource, but today the picture is still very bespoke and fragmented. What we need is for an OP Stack release compiled or deployed in Plasma Mode to output the security assumptions you have taken on. So standardization is critical. You're right that if everyone develops in their own small environment with no standard implementation, these problems multiply.

tdot: It's really great that there are already stakeholders and applications running it. Once it's in production, you understand user needs much more deeply. You know who is using the chain and who is deploying on it, and you can talk to them and ask, "What do you expect? What do you need? How much are you willing to pay, and is this price right?" That way you get real feedback instead of falling into endless discussion without really understanding the problem.

The whole point of game theory is that you have to test it in the real world; otherwise, you never know the real effect. You can speculate, but there will always be surprises. So you need to iterate and run experiments in a relatively safe environment. It's also very interesting to have different tiers of security: chains with higher security standards, and chains at the technological frontier that can run bold experiments.

Those chains may be cheaper and perform better, but they're also riskier. You can experiment on them and drive innovation from them, so being among the first users of a chain that's pushed to the limit carries both risk and reward. This is something we've spent a lot of time thinking about this year.

Ben: The Collective is also exploring contribution paths. I think you put it very well: you have to trade off testing new improvements in a real-world environment against running a proven, secure chain. We see the OP Stack as an open-source enabler of this process, where people develop amazing technology at the cutting edge, prove it works, and then merge it back into the standard so that everyone benefits.

This fits perfectly with the idea of positive-sum games, open source, and growth. You're absolutely right: to push the cutting edge, you have to make tradeoffs. It's critical to build processes that let us benefit in important moments like the Redstone launch, while solving the problem of flexible improvement in Ethereum scaling, which may have another decade of development ahead. We need these proven experiments to be clearly defined and merged into the standard.

We are very excited to embark on this journey with you all.

tdot: I think being part of the Superchain despite these differences is really interesting: revenue is shared, people are incentivized to experiment and deploy new chains, and the whole community benefits from a variety of different implementations.

It's a really nice model, unlike people running forks in their own corners, which are hard to track and prone to security issues. Here, you have a framework for verifying and checking what people are doing. That's definitely a huge advantage, and I think it formed quite naturally. It's been really fascinating to watch it develop over the past year.

Ben: Positive-sum games, man, we have to keep pushing the envelope. Ultimately, in the long run, these chains should be viewed as extensions of Ethereum. One really cool thing from the past year is the rollup improvement process coming online, which basically connects the core developers of Layer 1 and Layer 2.

In the future, we'll see Layer 2s gradually adopt some of the important EIPs that everyone is eager to see on Layer 1. Layer 2 is a great testing ground: improvements start in some random fork, then merge into the OP Stack, and eventually ship.

Eventually, these improvements will land on Layer 1, and everyone will cheer. That will be very cool. It's a bit like turning Ethereum into an organism, with the Ethereum codebase as its DNA.

tdot: That makes sense too.

Ben: Yeah, it's awesome. Speaking of the amazing Redstone chain, tdot, are you excited about what's happening on Redstone right now?

tdot: Yeah, we have the most amazing applications. Honestly, I'm constantly blown away. I've been playing This Cursed Machine[11], which is the craziest application running on Redstone right now. It's really awesome, especially when people unleash their creativity and make things that have never been done before.

Ben: Is this the first on-chain horror game? I'm not sure I've seen anything like This Cursed Machine before.

tdot: I don't know; that's a good question. I think putting these experiences on-chain really pushes the boundaries. I love that people are making brand-new games instead of just porting existing games on-chain.

Ben: I don't want to reach for the classic venture-capital analogy to the early internet era, but I do feel that autonomous worlds on Redstone are really leading the way. For context, when the internet first came out, people's instinct was to move existing things online, like making a newspaper digital.

Real innovation happens when you realize how to exploit the power of the new medium. Really, newspapers on the internet aren't as valuable as everyone having their own 280-character little newspaper. To me, this is very similar to the innovation space on Redstone. The community right now is pushing the boundaries of full-chain games and worlds and exploring how to move them forward.

tdot: Yeah, we're super excited. This environment really attracts a super active community of people who push ideas to the limit. It's a big improvement over a purely speculative mentality. I also think the idea of playing games with friends is heartwarming, and it attracts a lot of really nice people.

Now it's time to have fun, man. I don't think we're quite ready for all the new stuff that's coming, so I'm really looking forward to it.

Ben: Plasma is back. Long live Plasma.

tdot: We're very excited, and the building has only just begun.

References

[1]Plasma Mode: https://specs.optimism.io/experimental/plasma.html

[2]tdot: https://twitter.com/unsafetdot

[3]Redstone: https://redstone.xyz/

[4]Optimism: https://optimism.io/

[5]Ben Jones: https://twitter.com/ben_chain

[6]MUD: https://mud.dev/

[7]OPCraft: https://lattice.xyz/blog/making-of-opcraft-part-1-building-an-on-chain-voxel-game

[8]Biomes: https://biomes.aw/

[9]Sentinel: https://github.com/latticexyz/sentinel

[10]Fisherman's Dilemma: https://github.com/ethereum/research/wiki/A-note-on-data-availability-and-erasure-coding

[11]This Cursed Machine: https://thiscursedmachine.fun/

