Node.js 18 has been released by the Node.js community. The big news is that in October 2022 this version will be promoted to long-term support (LTS); once that happens, the release will carry the codename ‘Hydrogen’. Support for Node.js 18 will last until April 2025.
The most exciting addition is that version 18 finally ships native fetch functionality in Node.js. For the longest time, Node did not support fetch, a highly standard web API for making HTTP and other network requests; if you wanted to make an HTTP request, you had to either use third-party tools or write the request from scratch. The implementation comes from undici and is inspired by node-fetch, which was originally based upon undici-fetch. The implementation strives to be as close to spec-compliant as possible, but some aspects require a browser environment and are thus omitted.
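As a quick sketch of what this looks like (assuming Node.js 18 and an ESM module so top-level await works; the URL is a placeholder), a request now needs no dependencies at all:
// fetch-demo.mjs: run with "node fetch-demo.mjs" on Node.js 18+
// fetch, Request, Response, and Headers are available as globals.
const res = await fetch('https://api.example.com/items'); // placeholder URL
if (!res.ok) {
  throw new Error(`Request failed with status ${res.status}`);
}
console.log(await res.json()); // parse and print the JSON body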
The API will remain experimental until further test coverage is introduced and the contributors have verified that the API implements as much of the requirements as is practicable.
Because JavaScript is used in so many places, this is wonderful news for the entire ecosystem: it’s used on the web, in Node.js, and in Cloudflare Workers, for example.
Cloudflare Workers currently ship their own proprietary implementation of fetch, in Node you previously had to install packages before you could use it, and the web has its own version, so there has been plenty of inconsistency along the way. Node now provides official support for fetch. That means virtually any environment that runs JavaScript on servers, whether Node or Deno, will support fetch by default, and the implementation comes from the core team itself.
If you look at this change closely, you can see that Node essentially ported a library called Undici. What exactly is this library? It’s officially maintained by the Node.js team, and it’s a full-fledged HTTP/1.1 client written entirely in JavaScript.
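To get a feel for Undici on its own, here is a minimal sketch of its request API, based on the library's documented usage (assumes npm i undici and an ESM module so top-level await works; the URL is a placeholder):
import { request } from 'undici';

const { statusCode, headers, body } = await request('https://api.example.com/items');

console.log('status:', statusCode);
console.log('content-type:', headers['content-type']);
console.log('data:', await body.json()); // the body mixin exposes .json() and .text()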
The node:test module facilitates the creation of JavaScript tests that report results in TAP format. To access it:
import test from 'node:test';
This module is only available under the node: scheme. (Node.js documentation)
Node.js 18 features a test runner that is still in development. It is not meant to replace full-featured alternatives such as Jest or Mocha, but it does provide a quick and straightforward way to execute a test suite without any additional dependencies.
It produces TAP output, a widely used format that makes test results easier to consume.
More information may be found in the community blog post and the Node.js API docs.
Note: The test runner module is only available using the node: prefix. The node: prefix denotes the loading of a core module. Omitting the prefix and importing ‘test’ would attempt to load a userland module. (Node.js documentation)
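A minimal sketch of a test file for the new runner might look like this (assuming Node.js 18; the file and test names are arbitrary):
// math.test.mjs: run with "node math.test.mjs"
import test from 'node:test';
import assert from 'node:assert';

test('addition works', () => {
  assert.strictEqual(1 + 1, 2);
});

test('async tests are supported', async () => {
  const value = await Promise.resolve('hello');
  assert.strictEqual(value, 'hello');
});
Running the file prints TAP output, with one ok line per passing test.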
As with other major releases, this one raises the minimum supported levels of the systems and toolchains needed to build Node.js. Node.js includes pre-built binaries for a variety of platforms; the minimum toolchains for each major release are evaluated and raised if needed.
Due to issues building the V8 dependency, prebuilt binaries for 32-bit Windows will not be available at first. With a future V8 upgrade, we hope to restore 32-bit Windows binaries for Node.js 18.
The list of supported platforms is current as of the branch/release to which it belongs.
Node.js relies on V8 and libuv. We adopt a subset of their supported platforms.
There are three support tiers: Tier 1 (full support), Tier 2 (best-effort support), and Experimental (the build may not compile or the test suite may not pass).
The V8 engine has been updated to version 10.1, which is part of Chromium 101. Compared with the version shipped in Node.js 17.9.0, the following new features are included:
The findLast() and findLastIndex() methods solve this use case easily and ergonomically. They behave identically to their find() and findIndex() counterparts, except that they begin their search at the end of the Array or TypedArray.
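For example:
const numbers = [1, 2, 3, 4, 5];

// find() and findIndex() search from the start...
numbers.find((n) => n % 2 === 0);      // 2
numbers.findIndex((n) => n % 2 === 0); // 1

// ...while findLast() and findLastIndex() search from the end.
numbers.findLast((n) => n % 2 === 0);      // 4
numbers.findLastIndex((n) => n % 2 === 0); // 3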
This transition is intended to ease the maintenance burden on the community and challenge our development team to ship amazing, powerful new features without introducing breaking changes. Therefore, we have shipped a variety of robust features to Laravel 8 without breaking backwards compatibility, such as parallel testing support, improved Breeze starter kits, HTTP client improvements, and even new Eloquent relationship types such as “has one of many”.
Therefore, this commitment to ship great new features during the current release will likely lead to future “major” releases being primarily used for “maintenance” tasks such as upgrading upstream dependencies, which can be seen in these release notes.
Laravel 9 continues the improvements made in Laravel 8.x by introducing support for Symfony 6.0 components, Symfony Mailer, Flysystem 3.0, improved route:list output, a Laravel Scout database driver, new Eloquent accessor / mutator syntax, implicit route bindings via Enums, and a variety of other bug fixes and usability improvements.
Laravel 9.x requires a minimum PHP version of 8.0.
Laravel 9.x upgrades our upstream Flysystem dependency to Flysystem 3.x. Flysystem powers all of the filesystem interactions offered by the Storage facade.
Please review the upgrade guide to learn more about ensuring your application is compatible with Flysystem 3.x.
Laravel 9.x offers a new way to define Eloquent accessors and mutators. In previous releases of Laravel, the only way to define accessors and mutators was by defining prefixed methods on your model like so:
public function getNameAttribute($value)
{
    return strtoupper($value);
}

public function setNameAttribute($value)
{
    $this->attributes['name'] = $value;
}
However, in Laravel 9.x you may define an accessor and mutator using a single, non-prefixed method by type-hinting a return type of Illuminate\Database\Eloquent\Casts\Attribute:
use Illuminate\Database\Eloquent\Casts\Attribute;

public function name(): Attribute
{
    return new Attribute(
        get: fn ($value) => strtoupper($value),
        set: fn ($value) => $value,
    );
}
In addition, this new approach to defining accessors will cache object values that are returned by the attribute, just like custom cast classes:
use App\Support\Address;
use Illuminate\Database\Eloquent\Casts\Attribute;

public function address(): Attribute
{
    return new Attribute(
        get: fn ($value, $attributes) => new Address(
            $attributes['address_line_one'],
            $attributes['address_line_two'],
        ),
        set: fn (Address $value) => [
            'address_line_one' => $value->lineOne,
            'address_line_two' => $value->lineTwo,
        ],
    );
}
Enum casting is only available for PHP 8.1+.
Eloquent now allows you to cast your attribute values to PHP “backed” enums. To accomplish this, you may specify the attribute and enum you wish to cast in your model’s $casts property array:
use App\Enums\ServerStatus;

/**
 * The attributes that should be cast.
 *
 * @var array
 */
protected $casts = [
    'status' => ServerStatus::class,
];
Once you have defined the cast on your model, the specified attribute will be automatically cast to and from an enum when you interact with the attribute:
if ($server->status == ServerStatus::provisioned) {
    $server->status = ServerStatus::ready;

    $server->save();
}
PHP 8.1 introduces support for Enums. Laravel 9.x introduces the ability to type-hint an Enum on your route definition and Laravel will only invoke the route if that route segment is a valid Enum value in the URI. Otherwise, an HTTP 404 response will be returned automatically. For example, given the following Enum:
enum Category: string
{
    case Fruits = 'fruits';
    case People = 'people';
}
You may define a route that will only be invoked if the {category} route segment is fruits or people. Otherwise, an HTTP 404 response will be returned:
Route::get('/categories/{category}', function (Category $category) {
    return $category->value;
});
In previous releases of Laravel, you may wish to scope the second Eloquent model in a route definition such that it must be a child of the previous Eloquent model. For example, consider this route definition that retrieves a blog post by slug for a specific user:
use App\Models\Post;
use App\Models\User;

Route::get('/users/{user}/posts/{post:slug}', function (User $user, Post $post) {
    return $post;
});
When using a custom keyed implicit binding as a nested route parameter, Laravel will automatically scope the query to retrieve the nested model by its parent using conventions to guess the relationship name on the parent. However, this behavior was only previously supported by Laravel when a custom key was used for the child route binding.
However, in Laravel 9.x, you may now instruct Laravel to scope “child” bindings even when a custom key is not provided. To do so, you may invoke the scopeBindings method when defining your route:
use App\Models\Post;
use App\Models\User;

Route::get('/users/{user}/posts/{post}', function (User $user, Post $post) {
    return $post;
})->scopeBindings();
Or, you may instruct an entire group of route definitions to use scoped bindings:
Route::scopeBindings()->group(function () {
    Route::get('/users/{user}/posts/{post}', function (User $user, Post $post) {
        return $post;
    });
});
You may now use the controller method to define the common controller for all of the routes within the group. Then, when defining the routes, you only need to provide the controller method that they invoke:
use App\Http\Controllers\OrderController;

Route::controller(OrderController::class)->group(function () {
    Route::get('/orders/{id}', 'show');
    Route::post('/orders', 'store');
});
When using MySQL or PostgreSQL, the fullText method may now be added to column definitions to generate full text indexes:
$table->text('bio')->fullText();
In addition, the whereFullText and orWhereFullText methods may be used to add full text “where” clauses to a query for columns that have full text indexes. These methods will be transformed into the appropriate SQL for the underlying database system by Laravel. For example, a MATCH AGAINST clause will be generated for applications utilizing MySQL:
$users = DB::table('users')
    ->whereFullText('bio', 'web developer')
    ->get();
If your application interacts with small to medium sized databases or has a light workload, you may now use Scout’s “database” engine instead of a dedicated search service such as Algolia or MeiliSearch. The database engine will use “where like” clauses and full text indexes when filtering results from your existing database to determine the applicable search results for your query.
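Per the Scout documentation, switching to the database engine is a one-line change in your application's .env file:
SCOUT_DRIVER=database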
Sometimes you may need to transform a raw Blade template string into valid HTML. You may accomplish this using the render method provided by the Blade facade. The render method accepts the Blade template string and an optional array of data to provide to the template:
use Illuminate\Support\Facades\Blade;

return Blade::render('Hello, {{ $name }}', ['name' => 'Julian Bashir']);
Similarly, the renderComponent method may be used to render a given class component by passing the component instance to the method:
use App\View\Components\HelloComponent;

return Blade::renderComponent(new HelloComponent('Julian Bashir'));
In previous releases of Laravel, slot names were provided using a name attribute on the x-slot tag:
<x-alert>
    <x-slot name="title">
        Server Error
    </x-slot>

    <strong>Whoops!</strong> Something went wrong!
</x-alert>
However, beginning in Laravel 9.x, you may specify the slot’s name using a convenient, shorter syntax:
<x-slot:title>
    Server Error
</x-slot>
For convenience, you may now use the @checked directive to easily indicate if a given HTML checkbox input is “checked”. This directive will echo checked if the provided condition evaluates to true:
<input type="checkbox" name="active" value="active" @checked(old('active', $user->active)) />
Likewise, the @selected directive may be used to indicate if a given select option should be “selected”:
<select name="version">
    @foreach ($product->versions as $version)
        <option value="{{ $version }}" @selected(old('version') == $version)>
            {{ $version }}
        </option>
    @endforeach
</select>
Laravel now includes pagination views built using Bootstrap 5. To use these views instead of the default Tailwind views, you may call the paginator’s useBootstrapFive method within the boot method of your App\Providers\AppServiceProvider class:
use Illuminate\Pagination\Paginator;

/**
 * Bootstrap any application services.
 *
 * @return void
 */
public function boot()
{
    Paginator::useBootstrapFive();
}
If you’re building a minimalist product, but want to add an extra level of customisation or expansion, then plugins are a great way to enable that. They allow each instance to be customised with additional code that anyone can create.
In this tutorial, we will extend a minimal web server with a plugin system, create a plugin, and use that plugin with the server. We’ve already created a very basic web server. We’ll use this as our starter code.
But before we start, what is a plugin? A plugin is an extra piece of code that expands what a program can normally do. Web extensions are a good example of this. They leverage existing APIs, and build new functionality upon them. In certain circumstances, they can also be used to patch bugs and flaws that the main software provider has not yet addressed.
In this tutorial, we’ll be building a request counting plugin, and the system to load and unload that plugin.
Let’s start by creating a fresh Node.js project. I’ll be using NPM in this tutorial, but I’ll add the Yarn commands too.
mkdir my-plugin-app
cd my-plugin-app
npm init -y (yarn init -y)
npm i express (yarn add express)
We won’t go through the process of creating an app for our plugins system. Instead, we’ll use this starter code. A simple server with one endpoint. Create an index.js file and add this starter code.
const express = require('express');
const EventEmitter = require('events');

class App extends EventEmitter {
  constructor() {
    super();
    this.server = express();
    this.server.use(express.json());
  }

  start() {
    this.server.get('/', (req, res) => {
      res.send('Hello World!');
    });

    this.server.listen(8080, () => {
      console.log('Server started on port 8080');
      this.emit('start');
    });
  }

  stop() {
    if (this.stopped) return;
    console.log('Server stopped');
    this.emit('stop');
    this.stopped = true;
    process.exit();
  }
}

const app = new App();
app.start();

// Stop cleanly on any kind of exit.
["exit", "SIGINT", "SIGUSR1", "SIGUSR2", "SIGTERM", "uncaughtException"].forEach(event => {
  process.on(event, () => app.stop());
});
To get a sense as to what our plugin system will do, let’s create a little plugin containing everything a plugin will need. The type of plugin system we’ll be implementing has two events. load and unload are the two times we will directly call any code in the plugin. load sets up any extra routes, middleware or anything else that is part of the plugin, and unload tells us to safely stop whatever we’re doing and save any persistent data.
This plugin will set up a middleware to count the number of requests we get. In addition, it will add an API route so we can query the number of requests that have been made so far. The plugin will export two different functions, one for each event.
const fs = require('fs');

let count = 0;

function load(app) {
  // Restore the request count from the last run, if it exists.
  try {
    count = +fs.readFileSync('./counter.txt');
  } catch (e) {
    console.log('counter.txt not found. Starting from 0');
  }

  // Middleware: count every incoming request.
  app.server.use((req, res, next) => {
    count++;
    next();
  });

  // Extra route: query the current count.
  app.server.get('/count', (req, res) => {
    res.send({ count });
  });
}

// Save request count for next time.
function unload(app) {
  fs.writeFileSync('./counter.txt', String(count));
}

module.exports = { load, unload };
Our plugin system will be kept in a separate class from the main app; we’ll put it in a new file, plugins.js. The point of this class is to load and unload plugins.
The load function takes a plugin’s name and the path to its file, and uses require to load the module at runtime. The loadFromConfig method loads every plugin defined in a config file.
const fs = require('fs');

class Plugins {
  constructor(app) {
    this.app = app;
    this.plugins = {};
  }

  // Load every plugin listed in the config file (name -> path).
  async loadFromConfig(path = './plugins.json') {
    const plugins = JSON.parse(fs.readFileSync(path));
    for (const name in plugins) {
      await this.load(name, plugins[name]);
    }
  }

  async load(name, path) {
    try {
      const module = require(path);
      this.plugins[name] = module;
      await this.plugins[name].load(this.app);
      console.log(`Loaded plugin: '${name}'`);
    } catch (e) {
      console.log(`Failed to load '${name}'`);
      this.app.stop();
    }
  }
}

module.exports = Plugins;
We’ll use a plugins.json file to store the paths to all the plugins we wish to load, then call the loadFromConfig method to load them all at once. Put the plugins.json file in the same directory as your code.
{ "counter": "./counter.js" }
Finally, we’ll create an instance of the plugin system in our app. Import the Plugins class, create an instance in the constructor, and call loadFromConfig leaving the path blank (the default is ./plugins.json).
const express = require('express');
const EventEmitter = require('events');
const Plugins = require('./plugins');

class App extends EventEmitter {
  constructor() {
    super();
    this.plugins = new Plugins(this);
    this.server = express();
    this.server.use(express.json());
  }

  async start() {
    // Give every plugin a chance to register routes and middleware first.
    await this.plugins.loadFromConfig();

    this.server.get('/', (req, res) => {
      res.send('Hello World!');
    });

    this.server.listen(8080, () => {
      console.log('Server started on port 8080');
    });
  }

  stop() {
    if (this.stopped) return;
    console.log('Server stopped');
    this.stopped = true;
    process.exit();
  }
}

const app = new App();
app.start();

["exit", "SIGINT", "SIGUSR1", "SIGUSR2", "SIGTERM", "uncaughtException"].forEach(event => {
  process.on(event, () => app.stop());
});
We now need to handle the unload method exported from our plugin and, once it has been called, remove the plugin from the plugins collection. We’ll also include a stop method which will unload all plugins; we’ll use this method later to enable safe shutdowns.
const fs = require('fs');

class Plugins {
  constructor(app) {
    this.app = app;
    this.plugins = {};
  }

  async loadFromConfig(path = './plugins.json') {
    const plugins = JSON.parse(fs.readFileSync(path));
    for (const name in plugins) {
      await this.load(name, plugins[name]);
    }
  }

  async load(name, path) {
    try {
      const module = require(path);
      this.plugins[name] = module;
      await this.plugins[name].load(this.app);
      console.log(`Loaded plugin: '${name}'`);
    } catch (e) {
      console.log(`Failed to load '${name}'`);
      this.app.stop();
    }
  }

  // Ask a plugin to clean up, then drop it from the collection.
  unload(name) {
    if (this.plugins[name]) {
      this.plugins[name].unload(this.app);
      delete this.plugins[name];
      console.log(`Unloaded plugin: '${name}'`);
    }
  }

  // Unload everything; used for safe shutdowns.
  stop() {
    for (const name in this.plugins) {
      this.unload(name);
    }
  }
}

module.exports = Plugins;
To make sure that plugins get a chance to unload when the app closes, we need to call Plugins.stop. In the index.js code, we included a stop method that gets called when the app is killed, and we’ve just added a stop method to the Plugins class. So let’s call the Plugins.stop method when our app stop method is called.
Add the following to the App.stop method.
stop() {
  if (this.stopped) return;
+ this.plugins.stop();
  console.log('Server stopped');
  this.stopped = true;
  process.exit();
}
Laravel Forge launched their first official command-line tool that gives you a nice set of commands to manage your Forge servers, sites, and more.
The first release (v1.0) of the Forge CLI contains around thirty commands, including initiating deployments, viewing application logs, configuring SSH key authentication, and more.
You may install the Forge CLI as a global Composer dependency:
composer global require laravel/forge-cli
After it’s installed, you can run forge from your terminal to see usage information:
To view a list of all available Forge CLI commands and the current version of your installation, you may run the forge command from the command-line:
forge
You will need to generate an API token to interact with the Forge CLI. Tokens are used to authenticate your account without providing personal details. API tokens can be created from Forge’s API dashboard (opens new window).
After you have generated an API token, you should authenticate with your Forge account using the login command:
forge login
When managing Forge servers, sites, and resources via the CLI, you will need to be aware of your currently active server. You may view your current server using the server:current command. Typically, most of the commands you execute using the Forge CLI will be executed against the active server.
forge server:current
Of course, you may switch your active server at any time. To change your active server, use the server:switch command:
forge server:switch
forge server:switch staging
To view a list of all available servers, you may use the server:list command:
forge server:list
Before performing any tasks using the Forge CLI, you should ensure that you have added an SSH key for the forge user to your servers so that you can securely connect to them. You may have already done this via the Forge UI. You may test that SSH is configured correctly by running the ssh:test command:
forge ssh:test
To configure SSH key authentication, you may use the ssh:configure command. It accepts a --key option which instructs the CLI which public key to add to the server. In addition, you may provide a --name option to specify the name that should be assigned to the key:
forge ssh:configure
forge ssh:configure --key=/path/to/public/key.pub --name=sallys-macbook
After you have configured SSH key authentication, you may use the ssh command to create a secure connection to your server:
forge ssh
forge ssh server-name
To view the list of all available sites, you may use the site:list command:
forge site:list
One of the primary features of Laravel Forge is deployments. Deployments may be initiated via the Forge CLI using the deploy command:
forge deploy
forge deploy example.com
You may update a site’s environment variables using the env:pull and env:push commands. The env:pull command may be used to pull down an environment file for a given site:
forge env:pull
forge env:pull pestphp.com
forge env:pull pestphp.com .env
Once this command has been executed, the site’s environment file will be placed in your current directory. To update the site’s environment variables, simply open and edit this file. When you are done editing the variables, use the env:push command to push the variables back to your site:
forge env:push
forge env:push pestphp.com
forge env:push pestphp.com .env
If your site is utilizing Laravel’s “configuration caching” feature or has queue workers, the new variables will not be utilized until the site is deployed again.
You may also view a site’s logs directly from the command-line. To do so, use the site:logs command:
forge site:logs
forge site:logs --follow              # View logs in realtime

forge site:logs example.com
forge site:logs example.com --follow  # View logs in realtime
When a deployment fails, you may review the output / logs via the Forge UI’s deployment history screen. You may also review the output at any time on the command-line using the deploy:logs command. If the deploy:logs command is called with no additional arguments, the logs for the latest deployment will be displayed. Or, you may pass the deployment ID to the deploy:logs command to display the logs for a particular deployment:
forge deploy:logs
forge deploy:logs 12345
Sometimes you may wish to run an arbitrary shell command against a site. The command command will prompt you for the command you would like to run. The command will be run relative to the site’s root directory.
forge command
forge command example.com
forge command example.com --command="php artisan inspire"
As you may know, all Laravel applications include “Tinker” by default. To enter a Tinker environment on a remote server using the Forge CLI, run the tinker command:
forge tinker
forge tinker example.com
Forge provisions servers with a variety of resources and additional software, such as Nginx, MySQL, etc. You may use the Forge CLI to perform common actions on those resources.
To check the current status of a resource, you may use the {resource}:status command:
forge daemon:status
forge database:status
forge nginx:status

forge php:status      # PHP status (default PHP version)
forge php:status 8.0  # PHP 8.0 status
You may also view logs directly from the command-line. To do so, use the {resource}:logs command:
forge daemon:logs
forge daemon:logs --follow  # View logs in realtime

forge database:logs

forge nginx:logs            # View error logs
forge nginx:logs access     # View access logs

forge php:logs              # View PHP logs (default PHP version)
forge php:logs 8.0          # View PHP 8.0 logs
Resources may be restarted using the {resource}:restart command:
forge daemon:restart
forge database:restart
forge nginx:restart

forge php:restart      # Restarts PHP (default PHP version)
forge php:restart 8.0  # Restarts PHP 8.0
You may use the {resource}:shell command to quickly access a command line shell that lets you interact with a given resource:
forge database:shell
forge database:shell my-database-name
forge database:shell my-database-name --user=my-user
Frontend development is very important these days. There are a lot of tasks that a frontend developer has to do on a daily basis.
As front-end developers, we write a lot of HTML, CSS, and JavaScript code all the time. Knowing some coding tips could be very beneficial for us. That’s why in this article, I decided to share with you some frontend coding tips that you probably don’t know.
Do you know that you can hide an HTML element without using JavaScript?
By using the attribute hidden, you can easily hide an HTML element natively. As a result, that element won’t be displayed on the web page.
Here is the code example:
<p hidden>This paragraph won't show up. It's hidden by HTML.</p>
It’s always a good practice to use shorthands in order to make your CSS code smaller. The property inset in CSS is a useful shorthand for the four properties top, left, right, and bottom.
If these four properties have the same value, you can just use the property inset instead and your code will look much cleaner.
Here is an example:
Bad practice:
div {
  position: absolute;
  top: 0;
  left: 0;
  bottom: 0;
  right: 0;
}
Good practice:
div {
  position: absolute;
  inset: 0;
}
You can easily detect internet speed in JavaScript by using the navigator object. It’s very simple.
Here is an example:
navigator.connection.downlink;
As you can see above, it returns 5.65, an estimate of the connection’s downlink speed in megabits per second (my internet speed is not great).
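The connection object exposes a few more useful fields; here is a small sketch (note that the Network Information API is currently limited to Chromium-based browsers):
const { downlink, effectiveType, rtt, saveData } = navigator.connection;

console.log(`Downlink:   ~${downlink} Mb/s`); // estimated bandwidth
console.log(`Type:       ${effectiveType}`); // e.g. '4g'
console.log(`Latency:    ~${rtt} ms`);       // estimated round-trip time
console.log(`Data saver: ${saveData}`);      // user preference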
Again, you can easily use the method vibrate() in the navigator object to vibrate your phone.
Here is an example:
//vibrating the device for 500 milliseconds window.navigator.vibrate(500);
So as you can see, the device in this situation will vibrate for 500 milliseconds.
By using only CSS, you can disable pull to refresh on mobile. The property overscroll-behavior-y allows us to do that. Just give it the value contain.
Here is the example:
body{ overscroll-behavior-y: contain; }
There will be situations where you will need to prevent the user from pasting text into inputs.
Well, you can easily do that in JavaScript using a paste event listener.
Here is an example:
<input type="text">

<script>
  // Select the input.
  const input = document.querySelector('input');

  // Prevent the user from pasting text by listening for the paste event.
  input.addEventListener('paste', function (e) {
    e.preventDefault();
  });
</script>
As a result, now users can’t paste the copied text into the input field.
Databases are the backbone of our applications, and the more you learn about how they work, the better you will be at using them, writing applications against them, and troubleshooting problems when things inevitably go wrong.
So let’s dive into seven things you should (probably) know about databases.
We’ve seen a lot of dogmatic fist-banging about “the best” or “the worst” database, but the truth is the best database is the one that works best for your application. There’s no one-size-fits-all sort of database just like there’s no one-size-fits-all programming language or operating system.
When starting a new project, choosing the right database can be one of the most crucial decisions that you’ll make. So how should you choose which DB to use? We put together a list of five things to consider in our article on databases for developers, but let us also quickly go through them here.
What kind of data will be stored in the database?
Are you storing log files or user accounts?
How complex is the data that will be stored?
Can the data be normalized easily?
How uniform is the data?
Does your data roughly follow the same schema or is it disparate or heavily nested?
How often will it need to be read or written?
Is your application read- or write-heavy, or both?
Are there environmental or business considerations?
Do we have existing agreements with vendors? Do I need vendor support?
By answering these questions, you can help narrow down your choices to a few candidates. Once there, testing should tell you which one is the best for your application.
Sometimes you don’t have a choice and the database is already chosen for you. Whether you came to the project after it was started or political winds forced you a certain way, using the wrong database for the job can be frustrating.
But equally, if not more, frustrating is the process of migrating databases should you get the opportunity. Once you start down one path, it’s not easy to simply change paths in the middle of things. Not only do you have to figure out a way to replicate your data from one database to another and learn a whole new system, but depending on how tightly coupled your database code is with the rest of your application, you might also be looking at extensive rewrites as well. Changing databases is not a task that should be undertaken lightly and without a lot of consideration, debate, testing, and planning. There are so many ways that things can go horribly wrong. This is why #2 is so important: Once you choose, it’s hard to undo that choice.
The debate about using a SQL or NoSQL database will go on forever. We get that. But often missed in this argument is the fact that NoSQL databases don’t replace SQL databases. They complement them.
There are some things that NoSQL databases do very well and there are some things that SQL databases do very well. Prometheus is very good at storing time-series data like metrics, but you wouldn’t want to use MySQL for that. Is it technically possible? Yes, but it’s not designed for that and you’re not going to get the best performance or developer experience out of it. On the flip side, you wouldn’t want to use Redis to store highly relational data like user accounts or financial transactions for the same reasons. Sure, you could make it work in the code, but why add that complexity and headache when you could just use the right tool for the job?
There is going to be some inevitable overlap in some areas. There are some excellent databases that are technically NoSQL that do a good job of storing relational data (see: Couchbase), but there are other outside factors that go into using one over the other. Factors like client language support, operational tooling, cloud support, and others are all things to take into account when choosing a database.
In the next release of Laravel 8, you can disable lazy loading entirely, causing an exception to be thrown whenever it occurs.
Preventing lazy loading in development can help you catch N+1 bugs earlier on in the development process. The Laravel ecosystem has various tools to identify N+1 queries. However, this approach brings the issue front-and-center by throwing an exception.
Let’s walk through this feature real quick by spinning up a development version of the framework 8.x branch since this feature is not out yet at the time of writing. Once released, you will have this feature without switching to the latest 8.x branch.
First, create a new application:
laravel new strict-lazy-demo
Next, we’ll update the laravel/framework version in composer.json to make sure we have this feature (if you’re trying it out before the next release) by adjusting the version to 8.x-dev:
{
    "require": {
        "laravel/framework": "8.x-dev"
    }
}
Next, run composer update to make sure you get the latest version of the code for this branch:
composer update laravel/framework
At this point, you should set up your preferred database. We like running a local MySQL instance using Laravel’s defaults of using the root user without a password. We find it convenient to use the default .env values locally to get started quickly without any configuration.
mysql -uroot -e"create database strict_lazy_demo"
Once you configure your database of choice, make sure you can migrate:
php artisan migrate:fresh
We’ll create a Post model and define a one-to-many relationship from the User model to demonstrate this feature. We’ll start by creating the Post model and accompanying files:
# Create a model with migration and factory
php artisan make:model -mf Post
First, let’s define our Post migration and factory configuration:
// Your filename will differ based on when you create the file.
// 2021_05_21_000013_create_posts_table.php
Schema::create('posts', function (Blueprint $table) {
    $table->id();
    $table->foreignIdFor(\App\Models\User::class);
    $table->string('title');
    $table->longText('body');
    $table->timestamps();
});
Next, define your PostFactory definition method based on the above schema:
/**
 * Define the model's default state.
 *
 * @return array
 */
public function definition()
{
    return [
        'user_id' => \App\Models\User::factory(),
        'title' => $this->faker->sentence(),
        'body' => implode("\n\n", $this->faker->paragraphs(rand(2, 5))),
    ];
}
Finally, open up the DatabaseSeeder file and add the following in the run() method:
/**
 * Seed the application's database.
 *
 * @return void
 */
public function run()
{
    \App\Models\User::factory()
        ->has(\App\Models\Post::factory()->count(3))
        ->create();
}
Now that we have the migration, seeder, and model created, we are ready to associate a User with the Post model to demo this feature.
Add the following method to the User model to give the user an association with Posts:
// app/Models/User.php

/**
 * @return \Illuminate\Database\Eloquent\Relations\HasMany
 */
public function posts()
{
    return $this->hasMany(Post::class);
}
With that in place, we can migrate and seed the database:
php artisan migrate:fresh --seed
If all went well, you should see the migration and seeding output in the console.
We can now use tinker to inspect our seeded data and relationship:
php artisan tinker

>>> $user = User::first()
=> App\Models\User {#4091
     id: 1,
     name: "Nedra Hayes",
     email: "bruen.marc@example.com",
     email_verified_at: "2021-05-21 00:35:59",
     created_at: "2021-05-21 00:35:59",
     updated_at: "2021-05-21 00:35:59",
   }

>>> $user->posts
=> Illuminate\Database\Eloquent\Collection {#3686
     all: [
       App\Models\Post {#3369
         id: 1,
         ...
The $user->posts property triggers a database query when accessed; it is “lazy” but not optimized. The convenience of lazy loading is nice, but it can come with heavy performance burdens in the long term.
Now that we have the models set up, we can disable lazy loading across our application. You’d likely want to only disable in non-production environments, which is easy to achieve! Open up the AppServiceProvider class and add the following to the boot() method:
// app/Providers/AppServiceProvider.php

use Illuminate\Database\Eloquent\Model;

public function boot()
{
    Model::preventLazyLoading(! app()->isProduction());
}
If you run a php artisan tinker session again, this time you should get an exception for a lazy loading violation:
php artisan tinker

>>> $user = \App\Models\User::first()
=> App\Models\User {#3685
     id: 1,
     name: "Nedra Hayes",
     email: "bruen.marc@example.com",
     email_verified_at: "2021-05-21 00:35:59",
     #password: "$2y$10$92IXUNpkjO0rOQ5byMi.Ye4oKoEa3Ro9llC/.og/at2.uheWG/igi",
     #remember_token: "jHSxFGKOdw",
     created_at: "2021-05-21 00:35:59",
     updated_at: "2021-05-21 00:35:59",
   }

>>> $user->posts
Illuminate\Database\LazyLoadingViolationException with message
'Attempted to lazy load [posts] on model [App\Models\User] but lazy loading is disabled.'
If you want to visualize what happens if you use lazy loading in a view file, modify the default route as follows:
Route::get('/', function () {
    return view('welcome', [
        'user' => \App\Models\User::first(),
    ]);
});
Next, add the following somewhere in the welcome.blade.php file:
<h2>Posts</h2>

@foreach($user->posts as $post)
    <h3>{{ $post->title }}</h3>
    <p>{{ $post->body }}</p>
@endforeach
If you load up your application through Valet or artisan serve, you should see an error page reporting the LazyLoadingViolationException.
Though you’ll get exceptions during development, accidentally deploying code that triggers lazy-loading will continue to work as long as you set environment checking correctly in the service provider.
Application Performance Monitoring (APM), as the name suggests, is the process of monitoring the performance of the many aspects of your application.
When an end-user logs into your application, for even just one web page to load on their device, there are very many backstage components that need to come together and operate in synchrony to ensure a smooth and fast experience. These include network components (that carry the bytes of data), software components (e.g., server-side frameworks, front-end code, and other dependencies), and hardware components (i.e., CPU processors, memory, and storage of machines that host your web servers, APIs, databases, file systems, etc.) It can become overwhelming to manually keep track of your application performance on all these different levels and across all components. This is even truer when you ideally want monitoring and checks to happen all the time, in real-time!
Well, this is precisely the problem that APM solutions target. APM tools, like ScoutAPM, allow organizations to get a detailed analysis of the performance of their applications, in real-time. This includes critical information about server requests, response times, time-consuming methods and end-points, errors and their root cause analysis, and lots more – presented in a way that is easy to understand and troubleshoot.
These performance insights provide a lot of valuable information about optimizing resource allocations and effective cost reductions while surfacing other issues that could potentially fail your application – and all before the user gets a hint of anything being amiss.
Apart from presenting a bird’s eye view of what is happening within your application as a whole, APM tools provide you with your application’s score on particular metrics that quantify its performance along different grounds.
They provide metrics like request rates, response times, server load, CPU and memory usage, application throughput, server health status, and lots more, enabling organizations to understand what drives their application’s performance or failures.
They bring to light and help you identify performance bottlenecks, memory leaks, bloat, slow database queries, wasted execution cycles, and much more in your application. Additionally, tools like ScoutAPM enable teams to trace the cause of these issues to the specific line of the code causing them so that developers need to spend less time debugging and more time building.
Different platforms, frameworks, and APIs allow you to monitor the performance of a few of your applications’ components. For example, your cloud service provider could provide information about resource usage, logging frameworks could help you capture backend errors and processing times, etc. But wouldn’t it be much more useful to have everything under one roof: a one-stop platform for all the information you might need about your application’s performance?
Different organizations might want to optimize their application’s performance on different metrics. Some teams might want to prioritize more reliability and uptime, over other applications that might want to focus on higher speeds and lower response times. In this regard, equally important is the amount of flexibility that many of these tools offer in creating customizable dashboards – allowing you to focus on aspects of performance that matter the most to your application.
APM tools, therefore, can go a long way in resolving issues faster, preventing interruptions, boosting performance, increasing business and revenue, and understanding customer interactions.
Let us look at some common use cases of APM solutions to get a pragmatic understanding of how helpful they can be for developers and organizations to ensure that everything about their application is on track.
Application development involves a lot of code tweaking, solving bugs, adding features, experimenting with different libraries and frameworks, refactoring, and so on. This can lead to minor fluctuations in performance that developers might want to track and monitor throughout the development lifecycle and in the staging and production environments.
Therefore, application development can benefit a great deal from the insights provided by APM tools. These could be insights about the application’s performance or an in-depth analysis of issues down to the code level. By highlighting the source of the problem and isolating issues to specific lines (or methods) in the code causing them, these tools narrow down the areas of the project that they should be focusing more on.
Below is an example of code traceability in ScoutAPM, with Github integration enabled.
A bottleneck in software engineering refers to the negative effect on performance caused by the limited ability or capacity of one component of the system – similar to impeding water flow caused near a bottle’s constricted neck. A bottleneck is like the slower car on a single-track road that keeps everyone else waiting.
Even with the best software and hardware infrastructure in place, all it takes is one sub-optimal component to make your application crawl when it could be flying. APM tools help you identify performance bottlenecks with accuracy. These range from bottlenecks in disk usage, CPU utilization, memory to software and network components. APM platforms like Scout provide a complete analysis of several metrics like the memory allocation, response times, throughput, and error rates corresponding to each end-point in your application. Metrics like these provide insights into the long-term performance of these applications and help highlight where such bottlenecks lie.
It is important to note that if you are just starting out with web development, and working on smaller, personal projects, understanding the importance of APM tools might not come easily or seem super relevant. However, these tools become exponentially more valuable as your application(s) scale-up and cater to hundreds or thousands of users.
We are excited to announce the release of Node.js 16 today! Highlights include the update of the V8 JavaScript engine to 9.0, prebuilt Apple Silicon binaries, and additional stable APIs.
You can download the latest release from https://nodejs.org/en/download/current/, or use Node Version Manager on UNIX to install it with nvm install 16. The Node.js blog post containing the changelog is available at https://nodejs.org/en/blog/release/v16.0.0.
Initially, Node.js 16 will replace Node.js 15 as our ‘Current’ release line. As per the release schedule, Node.js 16 will be the ‘Current’ release for the next 6 months and then promoted to Long-term Support (LTS) in October 2021. Once promoted to long-term support the release will be designated the codename ‘Gallium’.
As a reminder – Node.js 12 will remain in long-term support until April 2022, and Node.js 14 will remain in long-term support until April 2023. Node.js 10 will go End-of-Life at the end of this month (April 2021). More details on our release plan/schedule can be found in the Node.js Release Working Group repository.
As always a new version of the V8 JavaScript engine brings performance tweaks and improvements as well as keeping Node.js up to date with JavaScript language features. In Node.js v16.0.0, the V8 engine is updated to V8 9.0 — up from V8 8.6 in Node.js 15.
This update brings the ECMAScript RegExp Match Indices, which provide the start and end indices of the captured string. The indices array is available via the .indices property on match objects when the regular expression has the /d flag.
> const matchObj = /(Java)(Script)/d.exec('JavaScript');
undefined
> matchObj.indices
[ [ 0, 10 ], [ 0, 4 ], [ 4, 10 ], groups: undefined ]
> matchObj.indices[0]; // Match
[ 0, 10 ]
> matchObj.indices[1]; // First capture group
[ 0, 4 ]
> matchObj.indices[2]; // Second capture group
[ 4, 10 ]
The Timers Promises API provides an alternative set of timer functions that return Promise objects, removing the need to use util.promisify().
import { setTimeout } from 'timers/promises';

async function run() {
  await setTimeout(5000);
  console.log('Hello, World!');
}

run();
Added in Node.js v15.0.0 by James Snell (https://github.com/nodejs/node/pull/33950), in this release, they graduate from experimental status to stable.
The nature of our release process means that new features are released in the ‘Current’ release line approximately every two weeks. For this reason, many recent additions have already been made available in the most recent Node.js 15 releases, but are still relatively new to the runtime.
Some of the recently released features in Node.js 15, which will also be available in Node.js 16, include:
An AbortController implementation based on the AbortController Web API
atob (buffer.atob(data)) and btoa (buffer.btoa(data)) implementations for compatibility with legacy web platform APIs
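As a small sketch of the first of these, an AbortController can cancel a promisified timer (assuming Node.js 15.4+, where AbortController became stable):
import { setTimeout } from 'timers/promises';

const ac = new AbortController();

// Start a 5-second timer that can be cancelled via the signal.
setTimeout(5000, 'finished', { signal: ac.signal })
  .then(console.log)
  .catch((err) => {
    if (err.name === 'AbortError') console.log('Timer was aborted');
    else throw err;
  });

ac.abort(); // cancel immediately; logs 'Timer was aborted'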
Node.js provides pre-built binaries for several different platforms. For each major release, the minimum toolchains are assessed and raised where appropriate.
Node.js v16.0.0 will be the first release where we ship prebuilt binaries for Apple Silicon. While we’ll be providing separate tarballs for the Intel (darwin-x64) and ARM (darwin-arm64) architectures, the macOS installer (.pkg) will be shipped as a ‘fat’ (multi-architecture) binary.
The production of these binaries was made possible thanks to the generosity of MacStadium donating the necessary hardware to the project.
On our Linux-based platforms, the minimum GCC level for building Node.js 16 will be GCC 8.3. Details about the supported toolchains and compilers are documented in the Node.js BUILDING.md file.
As a new major release, it’s also the time where we introduce new runtime deprecations. The Node.js project aims to minimize the disruption to the ecosystem for any breaking changes. The project uses a tool named CITGM (Canary in the Goldmine), to test the impact of any breaking changes (including deprecations) on a large number of the popular ecosystem modules to provide additional insight before landing these changes.
Notable deprecations in Node.js 16 include the runtime deprecation of access to process.binding() for a number of the core modules, such as process.binding('http_parser').
In this blog, we’ll make a comparative analysis of Golang vs. Node.js for backend web development.
Now, we want to understand whether the switch from a traditional Node.js to the popular Golang is sensible or not. That’s why we would like to compare the two solutions to help you make the best choice.
Even though Golang was only launched in 2009, it can still be regarded as quite mature and robust.
However, there can be no comparison when Node.js comes into play. It has a broader audience which supports the platform, even though the API is changing somewhat.
Being an interpreted language based on JavaScript, Node.js turns out to be a bit slower than compiled languages. Node.js cannot match the raw performance on CPU- or memory-bound tasks that Go delivers, since Go’s design has its roots in C and C++, which are strong performers from the start.
However, when it comes to real life, both show almost equal results.
Node.js is single-threaded and uses an event-callback mechanism, which makes it much weaker than Go in this respect. Go uses lightweight co-routines (called “goroutines”), and communication between them is elegant and seamless thanks to channels.
Node.js is much weaker in terms of parallel processes for big projects compared to Golang, which was specifically designed to overcome possible issues in this area. Golang has the advantage due to goroutines that enable multiple threads to be performed concurrently, with parallel tasks executed simply and safely.
Front-End and Back-End
You should keep in mind that Golang is perfect for server-side applications, while Node.js is unrivaled when it comes to client-side development. Therefore, Go is an ideal decision if you want to create high-performing concurrent services on the back-end. And Node.js is your choice for the front-end.
For a long time, Golang was regarded as having a very small community because it was young and not widely implemented. Now the situation has changed. Although Go still fails to keep pace with Node.js support, the language boasts numerous packages (more than 100), and the number keeps growing. With JavaScript, you’ll have no difficulty finding the right tool or package for your project; today, there are more than 100,000. Hundreds of libraries, various tutorials, and multiple platforms are at your disposal.
According to the 2017 Developer Survey by StackOverflow, JavaScript continues to occupy the leading position, chosen by 61.2% of developers. Go showed a far smaller share, at 4.3%. Still, this places Go among the most promising languages of 2018, as even a simple Google search suggests.
Currently, it’s still much easier to find a competent team of Node.js developers than put together one of Golang specialists. However, you can always take the IT outsourcing route and reach out to a reputable team with a strong portfolio of Go work.
When you deal with errors while using Go, you have to implement explicit error checking. This can make the process of finding the causes of errors difficult. Yet numerous developers argue that such an approach provides a cleaner application in general.
The Node.js approach with a throw/catch mechanism is more traditional and is preferred by many developers, although there are some problems with consistency at the end.
JavaScript is one of the most common coding languages nowadays. If you’re familiar with it, it will be no big deal to adapt to using Node.js programming. If you’re a newbie in JavaScript, you can leverage JavaScript’s vast community, which is always ready to share its expertise or give advice.
With Golang, you have to be ready to learn a new language, including co-routines, strict typing, pointers, and other programming concepts that may confuse you at first.
The latest trend of 2017 is blockchain technology. Many projects nowadays trumpet their blockchain-based application at every opportunity. And for good reason! The technology provides reliability, full control for the user, high-quality data, longevity, process integrity, transparency, and one more pack of buzzwords that define the viability of many startups today.
Theoretically, it’s possible to use Node.js for developing a blockchain. However, building a blockchain in Go is a much easier solution, and we highly recommend it.
In its essence, a blockchain is a distributed database of records. Go lends itself to an implementation built on an array and a map: the array keeps ordered hashes, and the map keeps hash -> block pairs (maps are unordered). Then we add blocks, and that’s it!
So, what should you choose: Node.js or Golang? The answer to this question depends on which type of development you need at the moment and how much you are going to scale the project.
For sure, Node.js has a broader community and a comprehensive documentation, yet, Go has a syntactically cleaner concurrency model, and it is better suited for scaling up.
Node.js, in its turn, can offer you a variety of packages, most of which are hard to re-implement in Go. In those cases, it would be wiser to use Node.js.
If you feel overwhelmed by all this information or simply need some extra hands with Golang or Node.js expertise, then write a comment to initialise a conversation with other developers here.
In this article, we’re going to discuss Node.js 15’s new features, including throw on unhandled rejections and V8 8.6 language features.
Node.js 15 was released recently. It comes with a number of major features:
Let’s explore what they are and how to use them.
In a previous article, we provided instructions on using NVM (Node Version Manager) to manage Node.js and NPM versions. In our environment, we had Node.js 12.16.0 and NPM 6.14.8 installed. By running nvm install node, we installed Node.js 15.4.0 and NPM 7.0.15.
We have two windows open, one is set to Node.js 12, and the other one is set to Node.js 15.
On the node12 window:
$ nvm use 12
Now using node v12.16.0 (npm v6.14.8)
On the node15 window:
$ nvm use 15
Now using node v15.4.0 (npm v7.0.15)
Now we’re ready to explore.
The unhandledRejection event is emitted whenever a promise is rejected and no error handler is attached to the promise within a turn of the event loop. Starting with Node.js 15, the default mode for unhandledRejection has changed from warn to throw. In throw mode, if an unhandledRejection hook is not set, the unhandledRejection is raised as an uncaught exception.
Create a program so that a promise is rejected with an error message:
function myPromise() {
  new Promise((_, reject) =>
    setTimeout(
      () =>
        reject({
          error: 'The call is rejected with an error',
        }),
      1000
    )
  ).then((data) => console.log(data.data));
}

myPromise();
When you run this code in the node12 window, it shows a long warning message:
$ node myPromise.js
(node:79104) UnhandledPromiseRejectionWarning: #<Object>
(node:79104) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)
(node:79104) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.

Users that have an unhandledRejection hook should see no change in behavior, and it’s still possible to switch modes using the --unhandled-rejections=mode process flag.
Run this code on the node15 window and it throws the UnhandledPromiseRejection error:
$ node myPromise.js
node:internal/process/promises:227
          triggerUncaughtException(err, true /* fromPromise */);
          ^
[UnhandledPromiseRejection: This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). The promise rejected with the reason "#<Object>".] {
  code: 'ERR_UNHANDLED_REJECTION'
}
Add an error handler to the then clause as in the code below (.catch((error) => console.log(error.error)) works too).
function myPromise() {
  new Promise((_, reject) =>
    setTimeout(
      () =>
        reject({
          error: 'The call is rejected with an error',
        }),
      1000
    )
  ).then(
    (data) => console.log(data.data),
    (error) => console.log(error.error)
  );
}

myPromise();
Now, the code runs correctly on both node12 and node15 windows:
$ node myPromise.js
The call is rejected with an error
It’s best practice to write an error handler for promises. However, there will be cases where errors are not caught. It’s a good idea to set up the unhandledRejection hook to catch potential errors.
function myPromise() {
  new Promise((_, reject) =>
    setTimeout(
      () =>
        reject({
          error: 'The call is rejected with an error',
        }),
      1000
    )
  ).then((data) => console.log(data.data));
}

myPromise();

process.on('unhandledRejection', (reason, promise) => {
  console.log('reason is', reason);
  console.log('promise is', promise);
  // Application specific logging, throwing an error, or other logic here
});
The unhandledRejection hook works for both Node.js 12 and Node.js 15. With that set up, unhandledRejection can be handled properly.
$ node myPromise.js
reason is { error: 'The call is rejected with an error' }
promise is Promise { <rejected> { error: 'The call is rejected with an error' } }
The V8 JavaScript engine has been updated from 8.4 to 8.6. Along with performance tweaks and improvements, the V8 update also brings the following language features: Promise.any() (with the new AggregateError class), String.prototype.replaceAll(), and the logical assignment operators &&=, ||=, and ??=.
First, let’s take a look at the existing Promise.all() method.
Promise.all() takes an iterable of promises as an input and returns a single promise that resolves to an array of the results of the input promises.
The following program calls Promise.all() on two resolved promises:
function myPromise(delay) {
  return new Promise((resolve) =>
    setTimeout(
      () =>
        resolve({
          data: `The data from ${delay} ms delay`,
        }),
      delay
    )
  );
}

async function getData() {
  try {
    const data = await Promise.all([myPromise(5000), myPromise(100)]);
    console.log(data);
  } catch (error) {
    console.log(error);
  }
}

getData();
The promise returned by Promise.all() resolves when all of the input promises have resolved, or when the input iterable contains no promises:
$ node myPromise.js
[
  { data: 'The data from 5000 ms delay' },
  { data: 'The data from 100 ms delay' }
]
The following program calls Promise.all() on two rejected promises.
function myPromise(delay) {
  return new Promise((_, reject) =>
    setTimeout(
      () =>
        reject({
          error: `The error from ${delay} ms delay`,
        }),
      delay
    )
  );
}

async function getData() {
  try {
    const data = await Promise.all([myPromise(5000), myPromise(100)]);
    console.log(data);
  } catch (error) {
    console.log(error);
  }
}

getData();
Promise.all() rejects immediately if any of the input promises reject (or a non-promise throws an error), and it rejects with that first rejection reason or error:
$ node myPromise.js
{ error: 'The error from 100 ms delay' }
Promise.any() is new in Node.js 15. This is the opposite of Promise.all(). It takes an iterable of promises and, as soon as one of the promises in the iterable fulfills, returns a single promise that resolves with the value from that promise.
The following program calls Promise.any() on two resolved promises:
function myPromise(delay) {
  return new Promise((resolve) =>
    setTimeout(
      () =>
        resolve({
          data: `The error from ${delay} ms delay`,
        }),
      delay
    )
  );
}

async function getData() {
  try {
    const data = await Promise.any([myPromise(5000), myPromise(100)]);
    console.log(data);
  } catch (error) {
    console.log(error);
    console.log(error.errors);
  }
}

getData();
Promise.any() returns the first resolved promise:
$ node myPromise.js
{ data: 'The error from 100 ms delay' }
The following program calls Promise.any() on two rejected promises:
function myPromise(delay) {
  return new Promise((_, reject) =>
    setTimeout(
      () =>
        reject({
          error: `The error from ${delay} ms delay`,
        }),
      delay
    )
  );
}

async function getData() {
  try {
    const data = await Promise.any([myPromise(5000), myPromise(100)]);
    console.log(data);
  } catch (error) {
    console.log(error);
    console.log(error.errors);
  }
}

getData();
If no promises in the iterable are fulfilled — i.e. all of the given promises are rejected — the returned promise is rejected with an AggregateError, a new subclass of Error that groups together individual errors.
$ node myPromise.js
[AggregateError: All promises were rejected]
[
  { error: 'The error from 5000 ms delay' },
  { error: 'The error from 100 ms delay' }
]
In the previous examples, we used setTimeout inside the promise call. The WindowOrWorkerGlobalScope’s setTimeout uses a callback. However, timers/promises provides a promisified version of setTimeout, which can be used with async/await.
const { setTimeout } = require('timers/promises');

async function myPromise(delay) {
  await setTimeout(delay);
  return new Promise((resolve) => {
    resolve({
      data: `The data from ${delay} ms delay`,
    });
  });
}

async function getData() {
  try {
    const data = await Promise.any([myPromise(5000), myPromise(100)]);
    console.log(data);
  } catch (error) {
    console.log(error);
    console.log(error.errors);
  }
}

getData();
AbortController is a JavaScript object that allows us to abort one or more web requests as and when desired. We gave examples of how to use AbortController in our article on useAsync.
Both await setTimeout and AbortController are experimental features.
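As a rough sketch of how the two combine (assuming Node.js 15’s experimental global AbortController), the promisified setTimeout accepts an options object with a signal property, so a pending timer can be cancelled via an AbortController:

const { setTimeout } = require('timers/promises');

const ac = new AbortController();

// Abort the pending 5-second timer after 1 second
setTimeout(1000).then(() => ac.abort());

setTimeout(5000, 'finished', { signal: ac.signal })
  .then((value) => console.log(value))
  .catch((err) => {
    // The promise rejects with an AbortError when the signal fires
    if (err.name === 'AbortError') {
      console.log('The timeout was aborted');
    }
  });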
First, let’s take a look at the existing String.prototype.replace() method.
replace() returns a new string with some or all matches of a pattern replaced by a replacement. The pattern can be a string or a regular expression. The replacement can be a string or a function to be called for each match.
If the pattern is a string, only the first occurrence will be replaced.
'20+1+2+3'.replace('+', '-');
Executing the above statement will yield “20-1+2+3”.
In order to replace all ‘+’ with ‘-’, a regular expression has to be used.
'20+1+2+3'.replace(/\+/g, '-');
Executing the above statement will yield “20-1-2-3”.
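The replacement can also be a function that is called once per match; for example:

'20+1+2+3'.replace(/\d+/g, (match) => String(Number(match) * 2));

Executing the above statement will yield “40+2+4+6”.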
replaceAll() is new in Node.js 15 and avoids the need for a regular expression. It returns a new string with all matches of a pattern replaced by a replacement. The pattern can be a string or a regular expression, and the replacement can be a string or a function to be called for each match.
With replaceAll(), we do not have to use a regular expression to replace all ‘+’ with ‘-’.
'20+1+2+3'.replaceAll('+', '-');
Executing the above statement will yield “20-1-2-3”.
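Note that replaceAll() also accepts a regular expression, but the expression must have the global (g) flag; passing a non-global regular expression throws a TypeError:

'20+1+2+3'.replaceAll(/\+/g, '-'); // "20-1-2-3"
'20+1+2+3'.replaceAll(/\+/, '-');  // throws TypeError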
Logical assignment operators &&=, ||=, and ??=
A few logical assignment operators have been added to Node.js 15.
The logical AND assignment (x &&= y) operator only assigns if x is truthy. x &&= y is equivalent to x && (x = y), and it is not equivalent to x = x && y.
let x = 0;
let y = 1;

x &&= 0; // 0
x &&= 1; // 0
y &&= 1; // 1
y &&= 0; // 0
The logical OR assignment (x ||= y) operator only assigns if x is falsy. x ||= y is equivalent to x || (x = y), and it is not equivalent to x = x || y.
let x = 0;
let y = 1;

x ||= 0; // 0
x ||= 1; // 1
y ||= 1; // 1
y ||= 0; // 1
The logical nullish assignment (x ??= y) operator only assigns if x is nullish (null or undefined). x ??= y is equivalent to x ?? (x = y), and it is not equivalent to x = x ?? y.
let x = undefined;
let y = '';

x ??= null;      // null
x ??= 'a value'; // "a value"
y ??= undefined; // ""
y ??= null;      // ""
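As a practical sketch (the connect helper is hypothetical), ??= is handy for filling in option defaults because, unlike ||=, it preserves legitimate falsy values such as 0 or '':

function connect(options = {}) {
  options.port ??= 5432; // assigns only when port is null or undefined
  options.retries ||= 3; // would also overwrite a legitimate 0
  return options;
}

console.log(connect({ port: 0 })); // { port: 0, retries: 3 }
console.log(connect());            // { port: 5432, retries: 3 }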
For more information and to develop web application using Node JS, Hire Node Developer from us as we give you a high-quality product by utilizing all the latest tools and advanced technology. E-mail us anytime at – hello@hkinfosoft.com or Skype us: “hkinfosoft”. To develop custom web apps using Node JS, please visit our Hire Node Developer technology page.
Content Source: