2 posts tagged with "MaaS"


The Core of StarLandAI’s DePIN:Proof of Computation

StarLandAI (Maintainer)


What are Verifiable Computing and Proof of Computation?

Verifiable Computing is a computational paradigm that allows a computer to delegate the computation of specific functions to other, possibly untrusted, parties while ensuring that the results can be effectively verified. These parties, upon completing the computation, return a proof that confirms the correctness of the computation process. With the rapid advancement of decentralized and cloud computing, outsourcing computational tasks to untrusted parties has become increasingly common, reflecting a growing demand for devices with limited computational power to delegate their workloads to more powerful third-party computing platforms. The concept of verifiable computing was first proposed by Babai et al. [1] and has been explored under various names, such as “checking computations” [1], “delegating computations” [2], and “certified computation” [3]. The term “verifiable computing” was explicitly defined by Rosario Gennaro, Craig Gentry, and Bryan Parno [4].

Proof of Computation (PoC) is a cryptographic protocol that allows a verifier to confirm that a computational task has been correctly executed without re-executing the entire computation. The core idea is that the executor of the task provides a brief proof, compact enough to be verified efficiently while conclusively demonstrating the correctness of the computation. In PoC, the executor first runs the computation on the input data and produces an output. They then create a proof containing sufficient information to verify the correctness of that output without revealing the input data or the specifics of the computation. The verifier can use this proof to confirm correctness without knowing the computation process or the original data. Proof of Computation has applications in multiple fields, such as:

  • Cloud Computing: In cloud services, customers may wish to verify that their data is being processed correctly without disclosing the data itself. PoC allows cloud service providers to provide proof that they have correctly executed the computation task.

  • Distributed Systems: In a distributed computing environment, nodes may need to verify the computational results of other nodes to ensure the consistency and reliability of the entire system.

  • Blockchain: In blockchain technology, PoC can be used to verify the execution results of smart contracts, which is crucial for ensuring the security and transparency of decentralized applications.

  • Privacy Protection: PoC can be used to protect personal privacy as it allows the verification of the correctness of computations without disclosing the original data.

Verifiable Computing is a broad field that encompasses a variety of technologies and applications, and Proof of Computation (PoC) is a key technology within this field, used to achieve the verifiability of computations. PoC is a component of verifiable computing, and together they support a more secure and trustworthy computing environment.
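The prover/verifier split described above can be made concrete with a deliberately simple sketch (not a real cryptographic proof system): the prover records every intermediate state of an outsourced computation and commits to the transcript, and the verifier checks the commitment plus a few randomly sampled steps instead of re-running everything. All function and variable names here are illustrative.

```python
import hashlib
import random

def f_step(state, x):
    # One step of the outsourced computation: a running sum of squares.
    return state + x * x

def prove(xs):
    """Prover: run the full computation and record every intermediate state."""
    states = [0]
    for x in xs:
        states.append(f_step(states[-1], x))
    # A hash of the whole transcript stands in for a succinct commitment.
    commitment = hashlib.sha256(repr(states).encode()).hexdigest()
    return states[-1], states, commitment

def spot_check(xs, states, commitment, trials=5):
    """Verifier: check the commitment, then re-execute only a few random steps."""
    if hashlib.sha256(repr(states).encode()).hexdigest() != commitment:
        return False
    for _ in range(trials):
        i = random.randrange(len(xs))
        if f_step(states[i], xs[i]) != states[i + 1]:
            return False
    return True
```

The verifier's work grows with `trials`, not with the length of the computation; real PoC systems replace the naive transcript hash with succinct proofs so that the proof itself also stays small.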

Mainstream Proof of Computation Principles and Technologies

2.1 Proof of Computation (PoC) based on Zero-Knowledge Proofs

(1) Proof of Computation (PoC) based on Zero-Knowledge Proofs is a cryptographic method that allows a prover to demonstrate to a verifier that a computational task has been correctly executed without revealing the specifics of the computation or any sensitive data. The core advantage of this method lies in privacy protection and enhanced security, as the verifier only needs to know whether the result is correct, not how it was achieved. The main technical process is as follows:

  • Define the computational task: First, it is necessary to clarify what the computational task to be verified is. This could be a mathematical function, an algorithm, or any other type of computational process.

  • Generate the proof: The prover performs the computational task and generates a zero-knowledge proof. This proof is a cryptographic structure that contains sufficient information to prove the correctness of the computation without including any sensitive information about the computational inputs or intermediate steps. Zero-knowledge proofs typically rely on complex cryptographic constructs such as elliptic curves, pairings, or zero-knowledge SNARKs (Succinct Non-Interactive Arguments of Knowledge).

  • Verify the proof: Upon receiving the proof, the verifier runs a verification algorithm to check the validity of the proof. If the proof is valid, the verifier can be confident that the computational task has been correctly executed without knowing the specific computational details.

  • Maintain privacy: Throughout the process, the prover does not need to disclose any information about the computational inputs. This is crucial for protecting data privacy and preventing potential data leaks.
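A production zk-SNARK is far too involved for a short example, but the commit/challenge/response shape of the steps above can be illustrated with a classic Schnorr proof of knowledge, made non-interactive via the Fiat–Shamir heuristic: the prover demonstrates knowledge of a secret exponent x with y = g^x mod p without revealing x. The group parameters below are tiny demonstration values, not secure choices.

```python
import hashlib
import secrets

# Tiny demonstration group: p = 2q + 1 with q prime; g generates the
# order-q subgroup. These sizes are for illustration only, NOT security.
p = 1019
q = 509
g = 4

def fiat_shamir(t, y):
    """Derive the challenge by hashing, replacing the verifier's message."""
    digest = hashlib.sha256(f"{t}|{y}".encode()).digest()
    return int.from_bytes(digest, "big") % q

def prove(x, y):
    """Prover: show knowledge of x with y = g^x mod p, without revealing x."""
    r = secrets.randbelow(q)      # one-time blinding value
    t = pow(g, r, p)              # commitment
    c = fiat_shamir(t, y)         # challenge
    s = (r + c * x) % q           # response
    return t, c, s

def verify(y, proof):
    """Verifier: accept iff the algebraic relation holds for this public y."""
    t, c, s = proof
    if c != fiat_shamir(t, y):
        return False
    return pow(g, s, p) == (t * pow(y, c, p)) % p
```

The proof reveals only (t, c, s); because r blinds x in the response, the verifier learns nothing about x beyond the fact that the prover knows it.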

(2) There are various technical approaches to implementing PoC based on zero-knowledge proofs, including:

  • zk-SNARKs: A special type of zero-knowledge proof offering succinctness, non-interactivity, and proof of knowledge. zk-SNARKs allow the prover to generate a short proof that the verifier can check without interacting with the prover.

  • zk-STARKs: A zero-knowledge proof that does not require a trusted setup, offering transparency and scalability. Unlike zk-SNARKs, zk-STARKs rely on collision-resistant hash functions rather than pairing-based cryptography, which removes the trusted-setup requirement.

  • Bulletproofs: A newer type of zero-knowledge proof that provides efficient verification without a trusted setup, particularly suitable for range proofs over confidential transaction amounts in blockchain applications.

2.2 Proof of Computation based on Trusted Hardware

(1) In contrast to purely software-based zero-knowledge proofs, PoC can also be implemented on trusted hardware, a method that relies on physical security features to ensure the correctness and security of the computation process. The implementation typically involves hardware security modules, such as secure processors, cryptographic cards, or Trusted Execution Environments (TEEs), designed to provide an isolated, secure execution environment that resists external attacks and unauthorized access. The main technical process is as follows:

  • Build secure boot for hardware and applications: Secure boot is a process that ensures only authenticated, unmodified software can be executed on the hardware. This is a fundamental step in ensuring hardware security.

  • Agree on cryptographic anchoring: When using trusted hardware, computational proofs are often combined with cryptographic anchoring. This means that the results or evidence of computation are associated with a cryptographic key, which is protected by trusted hardware.

  • Compute based on Trusted Execution Environment (TEE): TEE is a combination of hardware and software that provides a secure execution environment, protecting the code and data loaded into the TEE from external attacks and tampering. TEE typically includes a secure processor and an isolated memory area.

  • Verify computation through remote attestation: Remote attestation is a mechanism that allows the authenticity and integrity of the TEE to be verified remotely. Through remote attestation, a client can be assured that it is interacting with a genuine, unmodified TEE.
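As a rough sketch of the remote-attestation step, the flow below uses an HMAC under a device-provisioned key as a stand-in for the hardware-rooted signature a real TEE would produce (real schemes such as SGX remote attestation use asymmetric keys and a vendor attestation service; all names here are hypothetical).

```python
import hashlib
import hmac
import json

# Assumption: a secret provisioned into the device at manufacture and known
# to the verifier. Real TEEs use asymmetric hardware keys instead.
HARDWARE_KEY = b"device-provisioned-secret"

def make_report(enclave_hash, nonce):
    """TEE side: bind the measured code hash to the verifier's fresh nonce."""
    body = json.dumps({"measurement": enclave_hash, "nonce": nonce},
                      sort_keys=True)
    sig = hmac.new(HARDWARE_KEY, body.encode(), hashlib.sha256).hexdigest()
    return body, sig

def attest(body, sig, expected_hash, nonce):
    """Verifier side: check the signature, the measurement, and freshness."""
    expected_sig = hmac.new(HARDWARE_KEY, body.encode(),
                            hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected_sig):
        return False
    report = json.loads(body)
    return report["measurement"] == expected_hash and report["nonce"] == nonce
```

The nonce prevents replay of an old report, and the measurement check ties the attestation to a specific, unmodified code image.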

(2) The advantage of PoC based on trusted hardware is that it provides a physically secure computing environment, which is theoretically very difficult to breach, while also offering high performance.


StarLandAI’s Proof of Computation

3.1 Overall Process

To provide a reliable and scalable computational power infrastructure, StarLandAI has implemented a complete set of hardware authentication and proof of computation mechanisms by combining secure hardware with cryptographic algorithms. The overall process is illustrated in the figure below:

  1. During the startup phase of the computational node, the device performs a self-check, inspecting the status of components such as the GPU and CPU as well as their driver versions.

  2. The computational node daemon verifies the hash of the StarLandAI runtime image.

  3. The computational node daemon launches the StarLandAI runtime. If a Trusted Execution Environment (TEE) is available, the runtime is started inside the TEE.

  4. The StarLandAI runtime conducts a consistency check and loads the model.

  5. Once launched, the StarLandAI runtime checks its own operating environment, loads the model, identifies the certificate and device information, generates a runtime authentication report, and sends it to the StarLandAI DePIN Master in the form of a heartbeat.

  6. The StarLandAI DePIN Master validates the runtime information in the received report and completes the node access procedure.

  7. For a computational power assessment or inference task, the StarLandAI DePIN Master encrypts the task parameters and challenge values with the runtime’s public key and issues them.

  8. The runtime decrypts the task information to generate a runtime challenge response and a model-specific call challenge value, then calls the model to obtain the inference result.

  9. The runtime verifies the model challenge response value and the inference result, constructs a single-call computational proof using the runtime challenge response generated in step 8, and returns it to the StarLandAI DePIN Master, which verifies the proof and records the result, concluding the entire process.
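The master-issues-challenge / runtime-responds exchange described above could be sketched as follows. A shared HMAC key stands in for the runtime's asymmetric keypair, and all names are illustrative assumptions rather than StarLandAI's actual implementation.

```python
import hashlib
import hmac
import secrets

# Assumption: a symmetric key standing in for the runtime's keypair.
RUNTIME_KEY = b"runtime-identity-key"

def master_issue_task(prompt):
    """DePIN Master: attach a fresh challenge value to the task parameters."""
    return {"prompt": prompt, "challenge": secrets.token_hex(16)}

def runtime_execute(task, model):
    """Runtime: run inference and bind the result to the challenge."""
    result = model(task["prompt"])
    msg = f'{task["challenge"]}|{result}'.encode()
    tag = hmac.new(RUNTIME_KEY, msg, hashlib.sha256).hexdigest()
    return {"result": result, "response": tag}

def master_check(task, proof):
    """DePIN Master: verify the response tag before accepting the result."""
    msg = f'{task["challenge"]}|{proof["result"]}'.encode()
    expected = hmac.new(RUNTIME_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, proof["response"])
```

Because the tag covers both the fresh challenge and the inference result, a node cannot replay an old answer or alter a result after the fact without detection.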

[Figure: StarLandAI hardware authentication and proof-of-computation process]

3.2 Composition of Computation Proof

StarLandAI’s algorithm for generating computation proofs is an innovative solution designed to optimize the utilization of computational resources. The algorithm not only intelligently evaluates the computational capacity of each computational node to ensure the most suitable computational tasks are matched, but it also takes into account the computational throughput of the nodes to maximize efficiency and performance. Moreover, what sets StarLandAI apart is its in-depth analysis of the model capabilities supported by the nodes, allowing us to accurately schedule complex computational tasks, especially those advanced applications that require specific model support. With this comprehensive consideration, StarLandAI can significantly enhance the execution speed and accuracy of computational tasks while reducing operational costs. Our computation proof generation algorithm is the core that drives efficient, intelligent, and scalable management of computational resources, providing unparalleled support for AI and machine learning workloads.

StarLandAI computation proofs are divided into two categories:

  • Runtime Verified Report: A periodic assessment proof for an integrated computational node.
  • Proof of Inference Computation: A workload assessment proof for a specific inference task.

(1) Runtime verified report

The Runtime Verified Report is a periodic assessment proof for an integrated computational node. After the node completes self-inspection and initialization, it will periodically report its heartbeat, which must include the Runtime Verified Report. The specific structure of the Runtime Verified Report includes the following content:

  • Node Identity Address (associated with the identity certificate)
  • Node Computational Power Score (see the calculation formula in the Appendix)
  • Node Computational Power Equipment Information
  • Node Geographic Distribution Information
  • Node Identity Authentication Signature
  • Hardware Authentication Report of the Runtime

The node identity corresponds to a pair of public and private keys. StarLandAI records the node’s registered identity certificate information to support verification, encryption, and authentication in the subsequent computational process. The computational equipment information, computational power score, and geographic distribution information help StarLandAI select the optimal computational power when scheduling subsequent inference tasks. Each heartbeat report requires a node identity authentication signature to prevent impersonation by malicious parties.
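A hypothetical shape for such a report is sketched below; the field names are assumptions rather than StarLandAI's actual schema, and an HMAC stands in for the node's private-key signature.

```python
import hashlib
import hmac
import json
from dataclasses import dataclass, asdict

@dataclass
class RuntimeVerifiedReport:
    """Illustrative structure of the periodic Runtime Verified Report."""
    node_address: str      # tied to the node's identity certificate
    power_score: float     # computed via the formula in the Appendix
    equipment: dict        # e.g. GPU/CPU models and driver versions
    geo: str               # geographic distribution information
    hw_attestation: str    # hardware authentication report of the runtime

    def sign(self, node_key: bytes) -> str:
        """Produce the identity authentication signature over the report."""
        body = json.dumps(asdict(self), sort_keys=True).encode()
        return hmac.new(node_key, body, hashlib.sha256).hexdigest()
```

Signing a canonical (sorted-keys) serialization means any change to any field invalidates the signature, which is what lets the master reject forged heartbeats.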

(2) Proof of inference computation

Proof of Inference Computation is a proof of computational contribution for a specific inference task, which specifically includes the following content:

  • Computational Node Information
  • Hardware Authentication Report of the Computational Runtime
  • Task Challenge Response Value and Signature
  • Hash of the Model Snapshot Corresponding to the Task
  • Node Computational Power Score Involved in This Task

Appendix

Computation_Power_Score = S(Computing_Card_Count × Single_Card_Inference_Throughput × Deployed_Model_Scale × Model_Count)

Where:

  • Computation_Power_Score: Represents the final score or performance metric.
  • S: A function that normalizes the product of the factors into a standardized score.
  • Computing_Card_Count: The number of computing devices (such as GPUs or TPUs).
  • Single_Card_Inference_Throughput: The number of model inferences a single computing device can process per unit of time.
  • Deployed_Model_Scale: A measure of the scale or complexity of the deployed model, which correlates with the number of model parameters or computational requirements.
  • Model_Count: The total number of models deployed in the computational environment.
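The document does not define the normalization function S, so the sketch below assumes a simple logarithmic normalization purely for illustration:

```python
import math

def power_score(card_count, throughput, model_scale, model_count):
    """Computation power score; log10(1 + x) is an assumed stand-in for S."""
    raw = card_count * throughput * model_scale * model_count
    return math.log10(1 + raw)
```

A logarithm keeps scores comparable across nodes whose raw products differ by orders of magnitude, while remaining monotonic so that more capable nodes always score higher; the real S may of course differ.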

References

  1. Babai, László; Fortnow, Lance; Levin, Leonid A.; Szegedy, Mario (1991). “Checking computations in polylogarithmic time”. Proceedings of the Twenty-Third Annual ACM Symposium on Theory of Computing (STOC ’91). New York, NY, US: ACM. pp. 21–32. doi:10.1145/103418.103428.

  2. Goldwasser, Shafi; Kalai, Yael Tauman; Rothblum, Guy N. (2008). “Delegating computation”. Proceedings of the Fortieth Annual ACM Symposium on Theory of Computing (STOC ’08). New York, NY, US: ACM. doi:10.1145/1374376.1374396.

  3. Micali, Silvio (2000). “Computationally Sound Proofs”. SIAM Journal on Computing. 30 (4): 1253–1298. doi:10.1137/S0097539795284959.

  4. Gennaro, Rosario; Gentry, Craig; Parno, Bryan (2010). “Non-Interactive Verifiable Computing: Outsourcing Computation to Untrusted Workers”. CRYPTO 2010. doi:10.1007/978-3-642-14623-7_25.


StarLandAI: The First AI MaaS DePIN Network

StarLandAI (Maintainer)

StarLandAI is the first GenAI Model-as-a-Service (MaaS) DePIN network, supporting all types of large multimodal model applications and capable of running large multimodal models on any type of computing device.

Why do we need DePIN? As we all know, deploying, training, fine-tuning, and managing multimodal large models, integrating text, images, sound, databases, and distributed cloud-native systems, is highly complex. AI computing spans server-grade hardware such as the H100 and A100, consumer-grade cards such as the 4090, 3090, and 3080, integrated graphics, and CPUs, making unified management challenging. On the other hand, running large models efficiently on low-end compute resources such as a 3090, integrated graphics, or CPUs is highly challenging, leaving many resources idle. What distinguishes StarLandAI from current DePIN networks is that those networks, despite integrating substantial AI computing power, lack adequate support for developing, deploying, and maintaining generative AI applications, which limits their real-world usage. StarLandAI’s vision is to harness all idle compute resources into the DePIN layer, innovating a GenAI DePIN layer for blockchains by simplifying AI development with one-click APIs.

How can StarLandAI become the first AI MaaS DePIN network? StarLandAI lowers the barrier for AI developers, freeing them from concerns about computing power and multimodal model complexity. It also enables large models to run on low-end compute such as the 3080, 3090, and CPUs, increasing earnings and opportunities for compute providers. In this way, more Web2 AI users can be attracted to blockchains, enhancing their practicality.


The architecture of StarLandAI can provide multimodal large models through cloud services and APIs, including text, voice, image, and video, etc., allowing for scalability, ease of access, and flexibility. StarLandAI can utilize microservices, containers, immutable infrastructure, and declarative APIs to ensure the rapid and resilient deployment of GenAI services on any type of computing device such as 4090, 3090, 3080, integrated graphics, and CPUs. StarLandAI can also assist developers in creating GenAI applications such as AI avatars, image, voice, music, and video generation with GPT-level quality, compatible with blockchains including Solana, Ethereum, and Bitcoin.

The first AI DApp on StarLandAI is AI Avatars, which combine multimodal large models. On StarLandAI, you can easily create your on-chain AI avatar and turn it into a digital asset on the blockchain, earning steady income from the digital persona’s ongoing services. For holders of computing power, AI avatars can run on DePIN devices: you can contribute computing devices such as PCs, mobiles, and GPUs to the network for additional benefits.

So, let’s take a look at how to chat with AI avatars together.

Go and talk to your favourite AI avatars

You can find every AI avatar running on Starland on the “All Avatars” page. There are cute and clever girls, overbearing CEOs, handsome straight-A students, and even AI avatars like Trump. You can chat with them about anything, and you may find it feels like talking to the real Trump.

Not only can you chat, they may also reply to you with voice. If you are a little lucky, you may even receive their emoji packs. Isn’t that fun? There are hundreds of characters to choose from, so you can enjoy yourself to the fullest. You can have a young girl keep you company in chat, or find Yichan the little monk to ease your worries and doubts.


Create your own AI Avatars.

On StarLandAI, everyone can create their own AI digital person. You can customize the personality, characteristics, background, appearance, and even the voice of the AI digital person. Despite this richness, creating an AI digital person takes only two steps.

In fact, a single step can suffice: you can let the AI generate a character’s background information, appearance, and voice for you. If you like the result, just click to confirm, and you have a customized AI avatar. You can even use your own voice as the voice of your AI avatar.

In this process, StarLandAI uses deep learning techniques such as CNN, RNN, and Transformers, integrating data from multiple sources such as text, images, audio, and video, to enhance the understanding and adaptability of AI avatars. So you will get a very realistic AI avatar.


Let your PC make money for you.

Deploying, training, fine-tuning, and managing multimodal GenAI models, integrating text, images, sound, databases, and distributed cloud-native systems, is highly complex. On StarLandAI, you can now use your PC to participate in the training and inference of AI avatars, running large models efficiently even on low-end compute resources such as a 3090, so everyone can take part.

On StarLandAI, your DePIN devices support running large-scale models with multiple modalities: various devices, such as PCs, smartphones, IoT devices, and edge computing nodes, can execute complex models involving modalities such as text, images, and audio. You can thus obtain stable returns by providing computing power.

StarLandAI’s vision is to harness all idle compute resources. It integrates various unused computing power, including GPUs such as the 4090, 3090, and 3080, as well as compute from PCs, edge devices, and mobile platforms, transforming them into a versatile resource pool for multimodal large-scale models. StarLandAI takes advantage of Solana’s high efficiency and strong ecosystem to create a complete network of roles: computing power providers, AI avatar creators, AI users, and more. StarLandAI has innovated an AI Layer 2 for blockchains, bringing Web 2.0 AI users on-chain.

Online for just a week, StarLandAI has already been used by more than 10,000 people, making waves in the Solana ecosystem. The website (www.starland.ai), on par with c.ai and GPT in response speed and multimodal capabilities, is live; everyone is welcome to try it out. In this early stage, various types of points are being given away.