Deep Dive into Node.js
1. Deep Dive into Egg.js
- Lifecycle
The Egg.js framework has a rich lifecycle, which includes the following stages:
Config Loading Stage: In this stage, Egg loads the application's configuration, merging the default configuration file config/config.default.js with the environment-specific file config/config.${env}.js (for example, config/config.prod.js).
App Startup Stage: In this stage, the framework runs the startup hooks (app.beforeStart, or the willReady hook in the class-based lifecycle), where users can perform initialization work before the application starts serving requests.
Scheduled Task Startup Stage: Egg.js has a built-in scheduled task module, and users can configure scheduled tasks through configuration files. In this stage, the framework automatically starts these scheduled tasks.
Application Startup Completion Stage: When startup finishes, the framework triggers the ready callback (the didReady hook), where users can perform work that should run after the application has fully started.
Worker Process Startup Stage: After a worker's HTTP server starts listening, the serverDidReady hook fires, where users can perform work that depends on the server being available.
Application Shutdown Stage: When the application stops, the framework calls the beforeClose method, where users can perform some cleanup work before the application closes.
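The stages above map onto Egg's class-based lifecycle hooks declared in app.js. The sketch below shows the hook shape; the fake app object with a stages array (and driving the hooks manually) is only for illustration — in a real Egg app the framework instantiates this class and calls each hook itself:

```javascript
// app.js -- Egg discovers this file and instantiates the exported class,
// calling each lifecycle hook at the corresponding stage.
class AppBootHook {
  constructor(app) {
    this.app = app;
  }
  configWillLoad() {
    // Config files have been read but not yet merged; last chance to adjust them.
    this.app.stages.push('configWillLoad');
  }
  async didLoad() {
    // All files (config, plugins, services, ...) have been loaded.
    this.app.stages.push('didLoad');
  }
  async willReady() {
    // Plugins are ready; do initialization before the app starts serving.
    this.app.stages.push('willReady');
  }
  async didReady() {
    // Application startup is complete.
    this.app.stages.push('didReady');
  }
  async serverDidReady() {
    // The worker's HTTP server is now listening.
    this.app.stages.push('serverDidReady');
  }
  async beforeClose() {
    // Cleanup work before the application closes.
    this.app.stages.push('beforeClose');
  }
}

module.exports = AppBootHook;
```

Each hook may return a Promise (or be async), and Egg waits for it before moving to the next stage.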
- Configuration
The configuration file for Egg.js is a JavaScript file. By default, the config folder in the project directory contains a config.default.js file, which is the default configuration. Additionally, you can add configuration files for each runtime environment (Egg's built-in environments are local, unittest, and prod), for example config.local.js, config.unittest.js, and config.prod.js; the environment-specific file is merged over the defaults.
The configuration file defines some common configuration items, such as database connection information, middleware configuration, scheduled task configuration, etc. The framework loads these configurations at startup for the application to use.
Here is a simple example of an Egg.js configuration file:
// config/config.default.js
module.exports = {
  // Configuration items
  myConfig: 'value',
  // Middleware configuration
  middleware: ['middleware1', 'middleware2'],
  // Database connection information
  sequelize: {
    dialect: 'mysql',
    host: 'localhost',
    port: 3306,
    username: 'root',
    password: '123456',
    database: 'test',
  },
  // Scheduled task configuration
  schedule: {
    interval: '1m', // Executes every 1 minute
    type: 'all',
  },
};
In the application, you can access configuration items through app.config. For example, to get myConfig from the above configuration:
// Get configuration item
const myConfig = app.config.myConfig;
console.log(myConfig); // Outputs 'value'
2. Asynchronous Programming and Promise
- Event Mechanism
In JavaScript, the event mechanism is a way to handle asynchronous operations. It is based on the observer pattern and implements a publish-subscribe model. When an event occurs, all subscribers are notified to perform the corresponding actions.
In Node.js, the event mechanism is implemented through the events module. This module provides the EventEmitter class, which developers can extend to create custom events.
For example, here is a simple example using the event mechanism:
const EventEmitter = require('events');
// Create an instance of EventEmitter
const emitter = new EventEmitter();
// Subscribe to an event
emitter.on('customEvent', (data) => {
  console.log('Event occurred with data:', data);
});
// Emit an event
emitter.emit('customEvent', { message: 'Hello, Event!' });
In this example, we created an EventEmitter instance and subscribed to an event named customEvent using the on method. When the emit method is called to emit this event, all subscribers execute their corresponding callbacks.
- Promise
Promise is a solution for asynchronous programming that is more reasonable and powerful than the traditional solutions of callbacks and events. It was first proposed and implemented by the community; ES6 incorporated it into the language standard, unifying its usage and providing the Promise object natively.
A Promise object represents an asynchronous operation and can be in one of three states: pending (in progress), fulfilled (succeeded), or rejected (failed). Only the result of the asynchronous operation can determine the current state; no other operation can change it. This is the origin of the name Promise: a "commitment" that cannot be altered by outside means. Once the state changes, it never changes again, and the result can be retrieved at any time.
The state of a Promise can change in only two ways: from pending to fulfilled, or from pending to rejected. Once either transition occurs, the state is settled and will not change again; at that point the Promise is said to be resolved. Even if the transition has already happened, adding a callback to the Promise still yields the result immediately. This differs completely from events: if you miss an event, listening again will not recover it.
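This "settle once" behavior can be sketched directly (the values here are arbitrary):

```javascript
// Once a Promise settles, later resolve/reject calls are silently ignored.
const p = new Promise((resolve, reject) => {
  resolve('first');       // settles the promise as fulfilled
  resolve('second');      // ignored: the state is already fixed
  reject(new Error('x')); // also ignored
});

// A callback attached after the promise has settled still receives
// the result immediately on the next microtask.
p.then((value) => {
  console.log(value); // 'first'
});
```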
Here is a basic usage example of Promise:
// Create a Promise object
const promise = new Promise((resolve, reject) => {
  // Asynchronous operation
  setTimeout(() => {
    const success = true; // Simulate whether the asynchronous operation is successful
    if (success) {
      resolve('Operation succeeded'); // Asynchronous operation succeeded, call resolve
    } else {
      reject('Operation failed'); // Asynchronous operation failed, call reject
    }
  }, 2000);
});
// Use the then method to handle the successful state of the Promise
promise.then((result) => {
  console.log('Success:', result);
}).catch((error) => {
  console.error('Error:', error);
});
In this example, the Promise constructor accepts a function as a parameter, which contains the asynchronous operation. When the asynchronous operation is complete, you can call the resolve method to indicate success or the reject method to indicate failure. Then, you can use the then method to handle the successful state and the catch method to handle the failure state.
The advantage of Promise is that it makes handling asynchronous code clearer and can avoid callback hell. It provides a way for chaining calls, making the code more readable and maintainable.
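The chaining style can be sketched as follows; the step helper is a hypothetical asynchronous operation introduced only for this example:

```javascript
// Each then returns a new Promise, so asynchronous steps chain flatly
// instead of nesting callbacks (the "callback hell" shape).
const step = (value) => Promise.resolve(value + 1);

step(0)
  .then((v) => step(v)) // 1 -> 2
  .then((v) => step(v)) // 2 -> 3
  .then((v) => {
    console.log(v); // 3
  })
  .catch((err) => {
    // A single catch handles a failure thrown by any step in the chain.
    console.error(err);
  });
```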
3. Streams
In Node.js, Stream is a very important concept that allows data to be divided into multiple chunks and processed gradually, rather than loading all data into memory at once. The benefits of this approach include faster processing of large amounts of data and reduced memory usage. Additionally, Node.js has many built-in modules that can handle Streams, such as the fs (file system) and http (network communication) modules.
- Application Scenarios
- Reading/Writing Large Amounts of Data
If the data volume is particularly large, loading all data into memory at once can crash the program, while Streams can split the data into smaller chunks for gradual reading or writing, allowing for efficient handling of large amounts of data.
- Network Communication
The http, https, and net modules in Node.js all support Streams, and using Streams can improve network communication efficiency, especially when handling large amounts of data.
- File Processing
The fs module in Node.js also supports Streams, and some file operations such as file compression and file downloads can be implemented using Streams, reducing memory consumption and improving performance.
- Data Stream Processing
Streams can also be used for data stream processing, such as reading a large dataset from a database and saving it to the file system one by one.
- Usage
Streams are event-based, with data being read and written in small chunks, which are passed as events. For example, we can create a readable Stream object using the createReadStream method in the fs module, which reads data from a file and splits it into chunks:
const fs = require('fs');
const readStream = fs.createReadStream('bigfile.txt');
Next, we can bind the data event to this readable Stream object to listen for its data, as shown below:
const fs = require('fs');
const readStream = fs.createReadStream('bigfile.txt');
readStream.on('data', (chunk) => {
  console.log(chunk);
});
In the above code, when the readable Stream object reads data, it emits the data event, and the data is passed through the callback's chunk parameter. If we want to perform simple processing on this data, we can modify the callback as follows:
const fs = require('fs');
const readStream = fs.createReadStream('bigfile.txt');
readStream.on('data', (chunk) => {
  console.log(chunk.toString());
});
In this example, we convert the Buffer we read into a string before printing it. Similarly, the write method of a writable Stream object lets us write data one small chunk at a time.