Overall process
1. Create the ipfs node
- Create an IPFS node on the local computer using ipfs init
- Note: some of the commands in this article were run earlier without re-initializing the node, and some screenshots are copied from a previous document, so the exact values shown may differ from yours; go by your actual output
$ ipfs init
initializing IPFS node at /Users/CHY/.ipfs
generating 2048-bit RSA keypair… done
peer identity: QmdKXkeEWcuRw9oqBwopKUa8CgK1iBktPGYaMoJ4UNt1MP
to get started, enter:

ipfs cat /ipfs/QmVLDAhCY3X9P2uRudKAryuQFPM5zqA3Yij1dY8FpGbL7T/readme

$ ls
blocks datastore version config keystore
$ open ./
- After the ipfs init command initializes the node, a .ipfs folder is generated that stores the node ID, environment configuration and the data store
- If you’re on a Mac, press Shift + Command + . to view hidden files
- Run the ipfs id command to view the ID of the created node
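A quick way to check what the initialization produced (a sketch, assuming the default repo location ~/.ipfs):

ls ~/.ipfs        # blocks  config  datastore  keystore  version
ipfs id           # prints the peer ID, public key and addresses of the local node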
2. Start the node server
- Run the ipfs daemon command to start the node server
- The terminal now stays in the listening state, so open a new terminal tab to keep running commands
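For reference, a minimal sketch of this step:

ipfs daemon       # starts the node server; the terminal stays in the listening state
# open a new terminal tab to continue using ipfs commands while the daemon runs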
3. Simple verification
- Run the following command to perform a simple test
ipfs cat /ipfs/QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG/readme
- Enter the following URL in your browser: http://localhost:5001/webui. You’ll see a nice web UI
Related problems in detail
The storage location of IPFS data
- In IPFS, each user’s data is stored on their own hard disk, also called local storage. After storing it, the node broadcasts to the IPFS network: “I am storing the content whose hash is Qm…”. Because the hash is unique, data split in a given way exists only once in the network, i.e., it is stored only on the local node. When another user retrieves the data, the hash of the requested content is used as the key. The node first checks whether the key exists in its own DHT table (a key/value store). If not, it looks in the k-bucket closest to the key by XOR distance; the queried node returns the peers it believes are most likely to hold the value, and the lookup recurses until the value for the key is found. The requesting node then connects to the node identified by the value (the value is a node ID), requests the data, and stores the key/value pair in its own DHT table. The received data is placed in the requesting node’s IPFS cache, and the retrieval succeeds. Within the cache’s validity period, the requesting node can also serve the cached data to the IPFS network as a backup of the original.
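The lookup described above can be observed from the command line; a rough sketch, where the hash is a placeholder for any content hash:

ipfs dht findprovs <hash>    # ask the DHT which peers provide the content behind this hash
ipfs cat <hash>              # fetch the content; the received blocks land in the local cache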
IPFS redundancy and backup measures
- IPFS uses erasure coding for redundancy: a cluster keeps N shards of original data plus M shards of parity data, i.e., N + M shards in total. For example, with N = 4 data shards and M = 2 parity shards, any 4 of the 6 shards are enough to reconstruct the original data, so up to 2 shards can be lost.
Example: change the default storage space of a node
The default storage space of an IPFS node is 10 GB
Method 1: Open the terminal and run the following command
export EDITOR=/usr/bin/vim
ipfs config edit
- Find the Datastore.StorageMax entry (the content in the red box below) and change it to the size you want
- PS: press i to start editing; when you’re finished, press the Esc key, then type :wq and press Enter to save and exit. (A non-interactive alternative is sketched below.)
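As an alternative to editing the config interactively, the same value can be changed from the terminal; a sketch, assuming the limit lives under the Datastore.StorageMax key (20GB is just an example value):

ipfs config Datastore.StorageMax          # show the current limit (default 10GB)
ipfs config Datastore.StorageMax 20GB     # set a new limit; restart the daemon for it to take effect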
Method 2: use the Web UI to modify the configuration
- Modify the information and click Save
What happens when an IPFS node goes offline, and the impact on the network as a whole
- The fault-tolerance mechanism of IPFS ensures that enough copies of the data are stored in different regions. Even if the data in one place is completely destroyed by force majeure, the complete data can be recovered from backups in other regions, which greatly improves the safety of data stored in IPFS
- MerkleDAG was adopted because it has the following characteristics: 1. Content-addressable: all content, including links, is uniquely identified by a multihash checksum. 2. Tamper-proof: everything is verified against its checksum; if data is tampered with or corrupted, IPFS detects it. 3. Deduplication: identical data is stored only once. The IPFS network may still hold duplicated copies of the same data; the number of duplicates depends on the chunking method used when uploading.
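To see the block level of this structure, the blocks behind any added object can be listed; a sketch, where <hash> stands for whatever hash ipfs add returned:

ipfs ls <hash>           # list the immediate child links (files or blocks) of the object
ipfs refs -r <hash>      # list every block the object references, recursively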
- As mentioned earlier, data is stored in blocks in IPFS. There are several ways to split data into blocks; this is described in the IPFS source code core/commands/add.go:
- Default mode: the block size is 256 KB (256 x 1024 bytes), which corresponds to size=262144. The command needs no extra parameters, i.e., simply ipfs add file.
- Specified block size mode: the command is ipfs add --chunker=size-1000 file, where 1000 can be any number less than 262,144.
- Rabin variable block size mode: the command is ipfs add --chunker=rabin-[min]-[avg]-[max] file, where min, avg and max mean the minimum, average and maximum block sizes respectively; set them yourself to values less than 262,144.
The chunker option, ‘-s’, specifies the chunking strategy that dictates how to break files into blocks. Blocks with same content can be deduplicated. The default is a fixed block size of 256 * 1024 bytes, ‘size-262144’. Alternatively, you can use the rabin chunker for content defined chunking by specifying rabin-[min]-[avg]-[max] (where min/avg/max refer to the resulting chunk sizes). Using other chunking strategies will produce different hashes for the same file.
ipfs add ipfs-logo.svg
ipfs add --chunker=size-2048 ipfs-logo.svg
ipfs add --chunker=rabin-512-1024-2048 ipfs-logo.svg
- The same file stored in IPFS returns different hash values when different chunking methods are used. So IPFS never stores a duplicate block, but the data assembled from those blocks may be duplicated; in other words, the same file can be stored repeatedly in the IPFS network under different chunking methods.
- How does backup work? Take a very popular movie: people habitually store it on their own E: drive or some other disk. If 100 million people in the world store this movie, that is a huge waste of storage. In the IPFS network, the movie is initially stored on only one node, and a new copy is created only when another user reads it: the data is copied to whoever uses it. When a node joins the IPFS network, it contributes a portion of its disk space (10 GB by default, configurable) to the whole network. In general, storing a file on your own portion of disk space is the fastest, because no network transfer is needed. Once the file is stored, any node on the network can access it, and when another node accesses it, that node will usually copy your data into its cache space, so there are then two copies in the network. As more people become interested in the file, the number of copies on the network keeps growing.
- Note that these copies are generally cached, i.e., stored temporarily; after a long enough time they are deleted automatically. This temporary caching is a great fit for distributing data. A social hot topic, for example, tends to go through warm-up, peak and ebb phases, and with IPFS the distribution and number of copies match these phases: as more people visit, the number of copies increases, and as the heat cools, the number of copies drops, naturally balancing space utilization and access efficiency. If you want a file to be stored permanently, you must pin it, i.e., fix it on the node’s hard disk (see the sketch below).
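A minimal sketch of pinning, where the hash is a placeholder for the content you want to keep:

ipfs pin add <hash>              # pin the content so garbage collection will not remove it
ipfs pin ls --type=recursive     # list what is currently pinned on this node
ipfs repo gc                     # garbage-collect unpinned cached blocks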
Using IPFS
Uploading a TXT file
Upload files in other formats
- docx
- jpg
- mp4
- mp3
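The upload command is the same regardless of the file format; an illustrative sketch (the file names are made up):

ipfs add hello.txt
ipfs add report.docx
ipfs add photo.jpg
ipfs add video.mp4
ipfs add music.mp3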
Points to note
- Downloaded files need their format restored (for example, by renaming them with the correct extension), otherwise they cannot be used. This conversion can be done manually or with a command option.
- You can also specify the name of the downloaded file by adding -o file name, or add -a to package the output as a .tar archive, or -C to gzip-compress it into a .gz file.

ipfs get QmZJBKrLFPvn8zEatZsxSJTtJkCFm4YeMwChDLRPPPerZ6 -o 1.pdf
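For completeness, a sketch of the archive and compression options, reusing the hash from the example above:

ipfs get QmZJBKrLFPvn8zEatZsxSJTtJkCFm4YeMwChDLRPPPerZ6 -a -o 1.tar        # package the output as a tar archive
ipfs get QmZJBKrLFPvn8zEatZsxSJTtJkCFm4YeMwChDLRPPPerZ6 -a -C -o 1.tar.gz  # tar archive compressed with gzip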
- Use the open hh.pdf command to open the PDF file. The open command here is a built-in macOS command and has nothing to do with IPFS
docx
mp3
jpg
mp4
Upload the entire folder
- The files in the folder uploaded here are the same files used in the earlier tests, so their hash values are identical; this is exactly how IPFS avoids the same file being stored more than once when users upload it repeatedly.
View the subfiles contained in the uploaded file
View the referenced hash
- The concept of referenced hashes: in general, a folder’s hash references as many hashes as there are files under the folder, and each referenced hash is the hash of the corresponding file
If the uploaded content is a folder, you can pull the whole folder back to a local directory. The files inside keep their normal names and formats and need no conversion
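A sketch of the whole folder workflow (the folder name and hash are illustrative):

ipfs add -r myfolder         # add the folder recursively; prints a hash for each file and one for the folder
ipfs ls <folder-hash>        # view the sub-files contained in the folder
ipfs refs <folder-hash>      # view the hashes the folder references (one per file)
ipfs get <folder-hash>       # pull the folder back; files keep their names and formats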
Enter the hash in the search box to look up the file. If the file cannot be previewed, click Download to download it
Discovered problems
- Running the ipfs id command as the root user and as a common user shows each user’s own node information, and the two differ.