The .NET landscape continues to evolve at a blistering pace: under the LTS servicing policy, the latest update, .NET 8.0.1, was released on January 11, 2024.
It’s more than just a patch update; it’s a powerhouse packed with performance improvements, developer-friendly tools, and exciting cross-platform capabilities.
Performance is no longer just a buzzword in .NET; it's a tangible reality. The latest release shows measurable reductions in garbage collection times, thanks to optimized algorithms and improved memory management. JIT compilation has also received a major overhaul, leading to faster application startup and smoother execution.
These enhancements translate into real-world gains, with some applications reportedly seeing up to 20% faster execution times than on previous versions.
.NET 8.0.1 understands that developers are time-pressed beings. To ease their burdens, the platform introduces a list of productivity-boosting tools and features. Minimal APIs let you design lightweight web APIs with minimal code and boilerplate, reducing development time and maintenance headaches.
Hot Reload for ASP.NET Core eliminates the dreaded server restart cycle, allowing you to see code changes reflected instantly. This leads to a more fluid development workflow. Improved tooling for code analysis and debugging further adds to the efficiency mix, helping you identify and fix issues faster.
Gone are the days of juggling separate codebases for iOS, Android, Windows, and macOS. .NET MAUI (Multi-platform App UI) empowers you to build beautiful native mobile and desktop applications using a single codebase. This translates to massive savings in development time and effort, allowing you to focus on your app’s core functionality instead of platform-specific intricacies.
Imagine crafting a stunning mobile game or a feature-rich desktop application, all powered by the magic of .NET MAUI and your shared codebase.
.NET 8.0.1 simplifies the deployment process, making it easier than ever to get your creations onto various environments, including cloud platforms and containers. Improved container image support and smaller footprints allow for smoother deployments and efficient resource utilization. Whether you’re targeting Azure, AWS, or any other cloud platform, .NET 8.0.1 has your back.
.NET 8.0.1 offers a range of features to protect your applications from vulnerabilities. Enhanced cryptography libraries ensure secure data transmission and storage, while deprecation of insecure protocols and APIs minimizes potential attack vectors. Improved logging and auditing capabilities provide better visibility into your application’s security posture, allowing you to proactively identify and address potential threats.
In today’s digital landscape, security is paramount. And .NET 8.0.1 has got everything you need.
The list of innovations in .NET 8.0.1 goes beyond these highlights. Hardware intrinsics let .NET tap into modern CPUs and GPUs, boosting performance in areas like scientific computing and image processing. WPF hardware acceleration over RDP enhances remote application experiences, making them smoother and more responsive. And the improvements continue, from modernized asynchronous programming primitives to enhanced Blazor capabilities, giving developers a richer and more versatile platform.
.NET 8.0.1 is an invitation to build faster, more secure, and truly cross-platform applications with increased developer productivity. The future of application development is bright with .NET, and this latest release proves it.
For more information, please head over to our Hire .NET Developer page. To develop your dream project using ASP.NET, hire a .NET developer at HK Infosoft; we are committed to providing you with an innovative solution using the latest technology stacks. E-mail us anytime at hello@hkinfosoft.com or Skype us at “hkinfosoft”.
Node.js 21 was released on October 17, 2023 and came with a number of new features and improvements. These include experimental support for the WebSocket API, a new --experimental-default-type flag, and a number of updates to the test runner and llhttp parser.
Node.js 21 introduces experimental support for the WebSocket API, which allows you to connect to a WebSocket server from a Node.js application without resorting to third-party packages. This is an important innovation, as it allows real-time, two-way interactive communication between the client and server.
// Run with: node --experimental-websocket client.js
// Node.js 21 exposes a browser-compatible WebSocket client as a global,
// so no third-party package is required
const ws = new WebSocket('ws://localhost:8080');

ws.addEventListener('open', () => {
  console.log('WebSocket connection open');
});

ws.addEventListener('message', (event) => {
  console.log('Received message:', event.data);
});

ws.addEventListener('close', () => {
  console.log('WebSocket connection closed');
});
By default, Node.js interprets a .js file as CommonJS when neither the file extension nor a nearby package.json "type" field says otherwise. The new --experimental-default-type flag allows you to override this behavior and have such ambiguous input interpreted as ES modules instead.
To use the --experimental-default-type flag, you can use the following command:
node --experimental-default-type=module my-script.js
This will cause the my-script.js file to be interpreted as an ES module, even though it does not have an .mjs file extension.
Node.js 21 introduces a new --test-concurrency flag that allows you to specify the number of parallel processes that the test runner should use. This can be useful for improving performance on machines with multiple cores.
To use the --test-concurrency flag, you can use the following command:
node --test --test-concurrency=4
This will cause the test runner to execute tests in parallel using four processes.
Node.js 21 now supports passing globs as arguments to the test runner. This allows you to specify which test files to run based on patterns, rather than having to specify each file individually.
To run all test files that match the *.test.js pattern, you can use the following command:
node --test '*.test.js'
Quoting the pattern lets the test runner, rather than the shell, expand the glob.
Node.js 21 uses llhttp version 9.1.2, which introduces a number of new features and bug fixes. One of the most important changes is that llhttp 9.1.2 now has a strict mode that is enabled by default. This means that llhttp will now reject requests that are not formatted correctly.
If you are using a third-party library that sends malformed requests, you may need to disable strict mode in order to use it. To do this, you can use the following command:
node --insecure-http-parser my-script.js
Node.js 21 is a significant release that includes a number of new features and improvements. The experimental support for the WebSocket API is particularly exciting, as it opens up new possibilities for real-time applications.
For more information, please head over to our Hire Node Developer page. To develop a web application using Node.js, hire a Node developer at HK Infosoft; we are committed to providing you with an innovative solution using the latest technology stacks. E-mail us anytime at hello@hkinfosoft.com or Skype us at “hkinfosoft”.
To develop Android Mobile Apps, please visit our technology page.
.NET 7 is the successor to .NET 6 and focuses on being unified, modern, simple, and fast. Released on November 8, 2022, it is the latest version of the .NET platform and is supported for 18 months as a standard-term support (STS) release. It includes a number of new features and improvements, including:
C# 11 includes a number of new language features, such as raw string literals, list patterns, required members, and static abstract members in interfaces (the basis of generic math).
.NET 7 includes a number of performance improvements, such as on-stack replacement (OSR) in the JIT, faster reflection, and a leaner Native AOT publishing option.
.NET 7 includes a number of new features for cloud-native development, such as built-in support for creating container images directly from dotnet publish and improved observability through OpenTelemetry.
.NET 7 includes a number of new features for desktop development, such as continued accessibility, performance, and tooling improvements in WPF and Windows Forms.
.NET 7 includes a number of new features for mobile development, such as .NET MAUI, which lets you target Android, iOS, macOS, and Windows from a single codebase.
For more information, please head over to our Hire .NET Developer page. To develop a website using ASP.NET, hire a .NET developer at HK Infosoft; we are committed to providing you with an innovative solution using the latest technology stacks. E-mail us anytime at hello@hkinfosoft.com or Skype us at “hkinfosoft”.
In the rapidly evolving world of web development, developers are constantly on the lookout for frameworks that can provide both speed and efficiency. Enter Fastify, a lightweight and lightning-fast web framework for Node.js that has taken the development community by storm. If you’re a developer looking to create high-performance, scalable, and secure web applications, Fastify may be the game-changer you’ve been waiting for.
Fastify, developed by Matteo Collina and Tomas Della Vedova, is an open-source web framework for Node.js designed with a primary focus on speed and low overhead. Launched in 2016, Fastify has quickly gained popularity in the Node.js ecosystem due to its impressive performance, simplicity, and extensibility. It is built on top of Node.js’s HTTP module and takes full advantage of the latest JavaScript features to maximize its speed and efficiency.
// Require the framework and instantiate it
const fastify = require('fastify')({ logger: true });

// Declare a route
fastify.get('/', async (request, reply) => {
  return { hello: 'world' };
});

// Start the server
fastify.listen({ port: 3000 });
One of the primary reasons developers are flocking to Fastify is its exceptional performance. Thanks to its powerful and highly optimized core, Fastify boasts some of the fastest request/response times among Node.js frameworks. It leverages features like request validation, which is automatically generated from JSON schemas, to ensure that data is processed swiftly and accurately. Additionally, Fastify supports asynchronous programming and handles requests concurrently, making it ideal for handling heavy workloads and high traffic.
Fastify follows a minimalist approach, focusing on providing only the essential components needed to build web applications efficiently. Developers can opt-in to use various plugins to extend Fastify’s functionality as per their requirements. This approach not only keeps the core lightweight but also gives developers the flexibility to customize their stack with the specific tools they need. Furthermore, the ecosystem around Fastify is growing rapidly, with a wide array of plugins and middleware available, making it easy to integrate third-party tools seamlessly.
Fastify’s API is designed to be intuitive and easy to use, reducing the learning curve for developers. Its well-documented and expressive API allows developers to write clean, maintainable, and organized code. The framework’s emphasis on proper error handling and logging also contributes to its ease of use, helping developers quickly identify and rectify issues during development and production.
Data validation is a crucial aspect of web application development to ensure data integrity and security. Fastify utilizes JSON Schema for data validation, enabling developers to define the expected shape of incoming requests and responses. This not only simplifies the validation process but also automatically generates detailed and helpful error messages, making debugging a breeze.
Fastify is designed with security in mind. It encourages best practices such as using up-to-date cryptographic libraries and secure authentication mechanisms. Protection against common web application attacks like Cross-Site Scripting (XSS) and Cross-Site Request Forgery (CSRF) is available through official plugins such as @fastify/helmet and @fastify/csrf-protection. With Fastify, developers can rest assured that their applications are less prone to security vulnerabilities.
Fastify’s emergence as a top-tier web framework for Node.js is no coincidence. Its commitment to speed, minimalism, and extensibility sets it apart from the competition. Whether you’re building a small-scale API or a large-scale application, Fastify’s performance, easy-to-use API, and emphasis on security make it an excellent choice.
In the fast-paced world of web development, having a framework that can boost productivity and deliver top-notch performance is essential. Fastify has proven itself as a reliable and efficient framework, providing developers with the tools they need to create high-performance applications without compromising on code quality and security.
So, if you’re ready to take your Node.js projects to the next level, give Fastify a try, and experience the speed and power it brings to your development workflow.
For more information, or to develop web applications using Node.js, hire a Node.js developer from us; we deliver a high-quality product by utilizing the latest tools and advanced technology. E-mail us anytime at hello@hkinfosoft.com or Skype us at “hkinfosoft”.
To develop custom web apps using Node.js, please visit our technology page.
This transition is intended to ease the maintenance burden on the community and challenge our development team to ship amazing, powerful new features without introducing breaking changes. Therefore, we have shipped a variety of robust features to Laravel 9 without breaking backwards compatibility.
Therefore, this commitment to ship great new features during the current release will likely lead to future “major” releases being primarily used for “maintenance” tasks such as upgrading upstream dependencies, which can be seen in these release notes.
Laravel 10 continues the improvements made in Laravel 9.x by introducing argument and return types to all application skeleton methods, as well as all stub files used to generate classes throughout the framework. In addition, a new, developer-friendly abstraction layer has been introduced for starting and interacting with external processes.
PHP 8.1 is the minimum-required PHP version in Laravel 10. Some PHP 8.1 features, such as readonly properties and array_is_list, are used in Laravel 10.
Not only is the framework professionally maintained and updated on a regular basis, but so are all of the official packages and the ecosystem.
The following is a list of the most recent official Laravel packages that have been updated to support Laravel 10:
Predis is a robust Redis client for PHP that may help you get the most out of caching to provide a fantastic user experience. Laravel formerly supported both versions 1 and 2, but as of Laravel 10, the framework no longer supports Predis 1.
Although Laravel documentation mentions Predis as the package for interacting with Redis, you may also use the official PHP extension. This extension provides an API for communicating with Redis servers.
If you were to make an invokable validation rule in Laravel 9, you would need to add an --invokable flag after the Artisan command. This is no longer necessary because all Laravel 10 rules are invokable by default. So, you may run the following command to create a new invokable rule in Laravel 10:
php artisan make:rule CustomRule
On its initial release, Laravel utilized all of the type-hinting features available in PHP at the time. However, many new features have been added to PHP in the subsequent years, including additional primitive type-hints, return types, and union types.
Laravel 10.x thoroughly updates the application skeleton and all stubs utilized by the framework to introduce argument and return types to all method signatures. In addition, extraneous “doc block” type-hint information has been deleted.
This change is entirely backwards compatible with existing applications. Therefore, existing applications that do not have these type-hints will continue to function normally.
The Artisan test command has received a new --profile option that allows you to easily identify the slowest tests in your application:
php artisan test --profile
To improve the framework’s developer experience, all of Laravel’s built-in make commands no longer require any input. If the commands are invoked without input, you will be prompted for the required arguments:
php artisan make:controller
Laravel 10 can create a random and secure password with a given length:
$password = Str::password(12);
A new first-party package, Laravel Pennant, has been released. Laravel Pennant offers a light-weight, streamlined approach to managing your application’s feature flags. Out of the box, Pennant includes an in-memory array driver and a database driver for persistent feature storage.
Features can be easily defined via the Feature::define method:
use Laravel\Pennant\Feature;
use Illuminate\Support\Lottery;

Feature::define('new-onboarding-flow', function () {
    return Lottery::odds(1, 10);
});
Once a feature has been defined, you may easily determine if the current user has access to the given feature:
if (Feature::active('new-onboarding-flow')) {
    // ...
}
Of course, for convenience, Blade directives are also available:
@feature('new-onboarding-flow')
    <div>
        <!-- ... -->
    </div>
@endfeature
Laravel 10.x introduces a beautiful abstraction layer for starting and interacting with external processes via a new Process facade:
use Illuminate\Support\Facades\Process;

$result = Process::run('ls -la');

return $result->output();
Processes may even be started in pools, allowing for the convenient execution and management of concurrent processes:
use Illuminate\Process\Pool;
use Illuminate\Support\Facades\Process;

[$first, $second, $third] = Process::concurrently(function (Pool $pool) {
    $pool->command('cat first.txt');
    $pool->command('cat second.txt');
    $pool->command('cat third.txt');
});

return $first->output();
Horizon and Telescope have been updated with a fresh, modern look, including improved typography, spacing, and design.
Pest test scaffolding is now available when creating new Laravel projects. To enable it, use the --pest flag when building a new app with the Laravel installer:
laravel new example-application --pest
Laravel 10 is a significant release for the Laravel framework, and it comes with several new features and improvements that will help developers create more robust and efficient web applications. The introduction of Laravel Pennant, the new Process facade, and complete argument and return types throughout the application skeleton make it easier for developers to build modular and scalable applications. Dropping PHP 8.0 support is also a significant decision, ensuring that developers use a newer version of PHP that is more secure and efficient. As always, Laravel continues to evolve and innovate, making it an excellent choice for web development projects.
For more information, or to develop web applications using Laravel, hire a Laravel developer from us; we deliver a high-quality product by utilizing the latest tools and advanced technology. E-mail us anytime at hello@hkinfosoft.com or Skype us at “hkinfosoft”.
To develop custom web apps using Laravel, please visit our technology page.
In the realm of server-side JavaScript, Node.js has become a dominant force, revolutionizing the way we build web applications. With each new version, Node.js brings forth exciting enhancements, improved performance, and expanded capabilities. In this blog, we’ll embark on a journey through the evolution of Node.js, exploring the advancements that have led to the highly anticipated Node 20. We’ll delve into the key features of Node 20 and showcase an example that demonstrates its potential.
Since its initial release in 2009, Node.js has evolved significantly, shaping the landscape of JavaScript development. The first versions of Node.js introduced a non-blocking, event-driven architecture, enabling developers to build highly scalable and efficient applications. With its growing popularity, Node.js gained a vibrant ecosystem of modules and libraries, making it a versatile platform for both back-end and full-stack development.
As Node.js progressed, new features were introduced to enhance performance, security, and developer productivity. For instance, Node.js 8 introduced the Long-Term Support (LTS) release, which provided stability and backward compatibility. Node.js 10 brought improvements in error handling and diagnostic reports, making it easier to identify and resolve issues. Node.js 12 introduced enhanced default heap limits and improved performance metrics.
Now, let’s turn our attention to Node 20, the latest iteration of Node.js, and explore its groundbreaking features that are set to shape the future of JavaScript development.
– Improved Performance and Speed:
– Enhanced Security:
– Improved Debugging Capabilities:
– ECMAScript Modules (ESM) Support:
– Enhanced Worker Threads:
– Stable Test Runner
– url.parse() Warns on URLs With Ports That Are Not Numbers
Let’s explore what they are and how to use them.
– Node.js 20 incorporates the latest advancements in the V8 JavaScript engine, resulting in significant performance improvements. Let’s take a look at an example:
// File: server.js
const http = require('http');

const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello, world!');
});

server.listen(3000, () => {
  console.log('Server running on port 3000');
});
By leveraging the performance optimizations in Node.js 20, applications like the one above experience reduced response times and enhanced scalability, resulting in an improved user experience.
Security is a top priority for any application, and Node.js 20 introduces several features to bolster its security. One noteworthy enhancement is the upgraded TLS implementation, ensuring secure communication between servers and clients. Here’s an example of using TLS in Node.js 20:
// File: server.js
const https = require('https');
const fs = require('fs');

const options = {
  key: fs.readFileSync('private.key'),
  cert: fs.readFileSync('certificate.crt')
};

const server = https.createServer(options, (req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Secure Hello, world!');
});

server.listen(3000, () => {
  console.log('Server running on port 3000');
});
With the upgraded TLS implementation, Node.js 20 ensures secure data transmission, safeguarding sensitive information.
Node.js 20 introduces enhanced diagnostic and debugging capabilities, empowering developers to pinpoint and resolve issues more effectively. Consider the following example:
// File: server.js
const { performance, PerformanceObserver } = require('perf_hooks');

const obs = new PerformanceObserver((items) => {
  console.log(items.getEntries()[0].duration);
  performance.clearMarks();
});
obs.observe({ entryTypes: ['measure'] });

performance.mark('start');
// ... Code to be measured ...
performance.mark('end');
performance.measure('Duration', 'start', 'end');
In this example, the Performance API allows developers to measure the execution time of specific code sections, enabling efficient optimization and debugging.
Node.js 20 embraces ECMAScript Modules (ESM), providing a standardized approach to organize and reuse JavaScript code. Let’s take a look at an example:
// File: module.js
export function greet(name) {
  return `Hello, ${name}!`;
}

// File: app.js
import { greet } from './module.js';

console.log(greet('John'));
With ESM support, developers can now leverage the benefits of code encapsulation and organization in Node.js, facilitating better code reuse and maintenance.
Node.js 20 introduces improved worker threads, enabling true multi-threading capabilities within a Node.js application. Consider the following example:
// File: worker.js
const { Worker, isMainThread, parentPort, workerData } = require('worker_threads');

if (isMainThread) {
  const worker = new Worker(__filename, { workerData: 'Hello, worker!' });
  worker.on('message', (message) => console.log(message));
} else {
  parentPort.postMessage(workerData);
}
In this example, the main thread creates a worker thread that receives data and sends a message back. With enhanced worker threads, Node.js 20 empowers developers to harness the full potential of multi-core processors, improving application performance.
Node.js 20 includes an important change to the test_runner module. The module has been marked as stable after a recent update. Previously, the test_runner module was experimental, but this change marks it as a stable module ready for production use.
url.parse() accepts URLs with ports that are not numbers. This behavior might result in hostname spoofing with unexpected input. Such URLs will throw an error in future versions of Node.js, as the WHATWG URL API already does. Starting with Node.js 20, these URLs cause url.parse() to emit a warning.
Here is urlParse.js:
const url = require('node:url');

url.parse('https://example.com:80/some/path?pageNumber=5');  // no warning
url.parse('https://example.com:abc/some/path?pageNumber=5'); // shows warning
Execute node urlParse.js: the URL https://example.com:80/some/path?pageNumber=5, with a numerical port, does not show a warning, but https://example.com:abc/some/path?pageNumber=5, with a non-numeric port, does.
% node urlParse.js
(node:21534) [DEP0170] DeprecationWarning: The URL https://example.com:abc/some/path?pageNumber=5 is invalid. Future versions of Node.js will throw an error.
(Use `node --trace-deprecation ...` to show where the warning was created)
Conclusion
Node.js 20 brings a plethora of innovative features and enhancements that revolutionize the way developers build applications. Improved performance, enhanced security, advanced debugging capabilities, ECMAScript Modules support, and enhanced worker threads open up new possibilities for creating scalable, secure, and high-performing applications. By leveraging these cutting-edge features, developers can stay at the forefront of modern web development and deliver exceptional user experiences. Upgrade to Node.js 20 today and unlock a new era of JavaScript development!
For more information, or to develop web applications using Node.js, hire a Node developer from us; we deliver a high-quality product by utilizing the latest tools and advanced technology. E-mail us anytime at hello@hkinfosoft.com or Skype us at “hkinfosoft”.
To develop your custom web app using Node JS, please visit our technology page.
A Node.js major release is rolled out every six months. The new release becomes the Current release for six months, which gives library authors time to add support for it.
After six months, odd-numbered releases, such as 19, become unsupported, and even-numbered releases, such as 18, move to the Active LTS (long-term support) status and are ready for general use.
An LTS release typically guarantees that critical bugs will be fixed for a total of 30 months. Production applications should only use Active LTS or Maintenance LTS releases.
Node.js 19 was released recently and comes with six major features: an experimental node --watch mode, HTTP(S)/1.1 KeepAlive enabled by default, a stable WebCrypto API, removal of the --experimental-specifier-resolution flag, removal of DTrace/SystemTap/ETW support, and an upgrade of the V8 engine to version 10.7.
Let’s explore what they are and how to use them.
Run the command to install node 19.0.0:
% nvm install 19.0.0
Downloading and installing node v19.0.0...
Downloading https://nodejs.org/dist/v19.0.0/node-v19.0.0-darwin-x64.tar.xz...
Computing checksum with sha256sum
Checksums matched!
Now using node v19.0.0 (npm v8.19.2)
On any window, run the command to use node 19:
% nvm use 19
Now using node v19.0.0 (npm v8.19.2)
Now you’re ready to explore:
% node --version
v19.0.0
In A Hands-on Guide for a Server-Side Rendering React 18 App, you have to build a production Create React App by executing npm run build.
Then you can create server/index.js:
const express = require('express');
const path = require('path');

const app = express();
app.use(express.static(path.join(__dirname, '../build')));

app.listen(8080, () =>
  console.log('Express server is running on localhost:8080')
);
The server was running with nodemon, a tool that helps to develop Node.js applications by automatically restarting the application when file changes are detected. The command is nodemon server.
With Node.js 19, you no longer need to install the additional tool. Instead, you can execute node --watch to automatically restart the application when file changes are detected.
% node --watch server
(node:67643) ExperimentalWarning: Watch mode is an experimental feature. This feature could change at any time
(Use `node --trace-warnings ...` to show where the warning was created)
Express server is running on localhost:8080
Node.js 19 sets keepAlive to true by default. This means that any outgoing HTTP(s) connection will automatically use HTTP 1.1 keepAlive. The default keepAlive duration is 5 seconds.
Change the above server/index.js to:
const http = require('node:http');
console.log(http.globalAgent);

const https = require('node:https');
console.log(https.globalAgent);
Execute the server code using node.js 16:
% nvm use 16
Now using node v16.0.0 (npm v7.10.0)
% node server
Agent {
  _events: [Object: null prototype] {
    free: [Function (anonymous)],
    newListener: [Function: maybeEnableKeylog]
  },
  _eventsCount: 2,
  _maxListeners: undefined,
  defaultPort: 80,
  protocol: 'http:',
  options: [Object: null prototype] { path: null },
  requests: [Object: null prototype] {},
  sockets: [Object: null prototype] {},
  freeSockets: [Object: null prototype] {},
  keepAliveMsecs: 1000,
  keepAlive: false,
  maxSockets: Infinity,
  maxFreeSockets: 256,
  scheduling: 'lifo',
  maxTotalSockets: Infinity,
  totalSocketCount: 0,
  [Symbol(kCapture)]: false
}
Agent {
  _events: [Object: null prototype] {
    free: [Function (anonymous)],
    newListener: [Function: maybeEnableKeylog]
  },
  _eventsCount: 2,
  _maxListeners: undefined,
  defaultPort: 443,
  protocol: 'https:',
  options: [Object: null prototype] { path: null },
  requests: [Object: null prototype] {},
  sockets: [Object: null prototype] {},
  freeSockets: [Object: null prototype] {},
  keepAliveMsecs: 1000,
  keepAlive: false,
  maxSockets: Infinity,
  maxFreeSockets: 256,
  scheduling: 'lifo',
  maxTotalSockets: Infinity,
  totalSocketCount: 0,
  maxCachedSessions: 100,
  _sessionCache: { map: {}, list: [] },
  [Symbol(kCapture)]: false
}
Execute the server code using node.js 19:
% nvm use 19
Now using node v19.0.0 (npm v8.19.2)
% node server
Agent {
  _events: [Object: null prototype] {
    free: [Function (anonymous)],
    newListener: [Function: maybeEnableKeylog]
  },
  _eventsCount: 2,
  _maxListeners: undefined,
  defaultPort: 80,
  protocol: 'http:',
  options: [Object: null prototype] {
    keepAlive: true,
    scheduling: 'lifo',
    timeout: 5000,
    noDelay: true,
    path: null
  },
  requests: [Object: null prototype] {},
  sockets: [Object: null prototype] {},
  freeSockets: [Object: null prototype] {},
  keepAliveMsecs: 1000,
  keepAlive: true,
  maxSockets: Infinity,
  maxFreeSockets: 256,
  scheduling: 'lifo',
  maxTotalSockets: Infinity,
  totalSocketCount: 0,
  [Symbol(kCapture)]: false
}
Agent {
  _events: [Object: null prototype] {
    free: [Function (anonymous)],
    newListener: [Function: maybeEnableKeylog]
  },
  _eventsCount: 2,
  _maxListeners: undefined,
  defaultPort: 443,
  protocol: 'https:',
  options: [Object: null prototype] {
    keepAlive: true,
    scheduling: 'lifo',
    timeout: 5000,
    noDelay: true,
    path: null
  },
  requests: [Object: null prototype] {},
  sockets: [Object: null prototype] {},
  freeSockets: [Object: null prototype] {},
  keepAliveMsecs: 1000,
  keepAlive: true,
  maxSockets: Infinity,
  maxFreeSockets: 256,
  scheduling: 'lifo',
  maxTotalSockets: Infinity,
  totalSocketCount: 0,
  maxCachedSessions: 100,
  _sessionCache: { map: {}, list: [] },
  [Symbol(kCapture)]: false
}
Enabling keepAlive delivers better throughput, as connections are reused by default.
Additionally, the agent can now parse the Keep-Alive response header that servers might send. This header instructs the client on how much longer to stay connected.
On the other side, the HTTP server will automatically disconnect idle clients when close() is invoked. It is accomplished by http(s).Server.close calling closeIdleConnections internally.
With these changes, HTTP(S)/1.1 requests may experience a better throughput/performance by default.
The WebCrypto API is an interface for building systems that use cryptography. With Node.js 19, the WebCrypto API is stable (with the exception of these algorithms: Ed25519, Ed448, X25519, and X448).
You can use globalThis.crypto or require('node:crypto').webcrypto to access this module. The following server/index.js uses subtle as an example; the SubtleCrypto interface provides a number of low-level cryptographic functions:
```
const { subtle } = globalThis.crypto;

(async function () {
  const key = await subtle.generateKey(
    { name: 'HMAC', hash: 'SHA-256', length: 256 },
    true,
    ['sign', 'verify']
  );
  console.log('key =', key);

  const enc = new TextEncoder();
  const message = enc.encode('I love cupcakes');
  console.log('message =', message);

  const digest = await subtle.sign({ name: 'HMAC' }, key, message);
  console.log('digest =', digest);
})();
```
The following console information shows the values of key, message, and digest:
```
% node server
key = CryptoKey {
  type: 'secret',
  extractable: true,
  algorithm: { name: 'HMAC', length: 256, hash: [Object] },
  usages: [ 'sign', 'verify' ]
}
message = Uint8Array(15) [
  73, 32, 108, 111, 118, 101, 32, 99,
  117, 112, 99, 97, 107, 101, 115
]
digest = ArrayBuffer {
  [Uint8Contents]: <30 01 7a 5c d9 e2 82 55 6b 55 90 4f 1d de 36 d7 89 dd fb fb 1a 9e a0 cc 5d d8 49 13 38 2f d1 bc>,
  byteLength: 32
}
```
Node.js has removed the --experimental-specifier-resolution flag, because its functionality can be achieved via custom loaders.
Clone the example repository:
git clone https://github.com/nodejs/loaders-test.git
Go to the example directory:
% cd loaders-test/commonjs-extension-resolution-loader
Install the packages:
% yarn install
Here is loaders-test/commonjs-extension-resolution-loader/test/basic-fixtures/index.js:
```
import { version } from 'process';
import { valueInFile } from './file';
import { valueInFolderIndex } from './folder';

console.log(valueInFile);
console.log(valueInFolderIndex);
```
Here is loaders-test/commonjs-extension-resolution-loader/test/basic-fixtures/file.js:
```
export const valueInFile = 'hello from file.js';
```
Here is loaders-test/commonjs-extension-resolution-loader/test/basic-fixtures/folder/index.js:
```
export const valueInFolderIndex = 'hello from folder/index.js';
```
We have mentioned in another article that there are two ways to execute ESM code.
Either way, the following two commands will fail:
```
% node test/basic-fixtures/index
% node test/basic-fixtures/index.js
```
However, all these issues can be resolved by the custom loader, loaders-test/commonjs-extension-resolution-loader/loader.js:
```
import { isBuiltin } from 'node:module';
import { dirname } from 'node:path';
import { cwd } from 'node:process';
import { fileURLToPath, pathToFileURL } from 'node:url';
import { promisify } from 'node:util';
import resolveCallback from 'resolve/async.js';

const resolveAsync = promisify(resolveCallback);

const baseURL = pathToFileURL(cwd() + '/').href;

export async function resolve(specifier, context, next) {
  const { parentURL = baseURL } = context;

  if (isBuiltin(specifier)) {
    return next(specifier, context);
  }

  // `resolveAsync` works with paths, not URLs
  if (specifier.startsWith('file://')) {
    specifier = fileURLToPath(specifier);
  }
  const parentPath = fileURLToPath(parentURL);

  let url;
  try {
    const resolution = await resolveAsync(specifier, {
      basedir: dirname(parentPath),
      // For whatever reason, --experimental-specifier-resolution=node doesn't search for .mjs extensions
      // but it does search for index.mjs files within directories
      extensions: ['.js', '.json', '.node', '.mjs'],
    });
    url = pathToFileURL(resolution).href;
  } catch (error) {
    if (error.code === 'MODULE_NOT_FOUND') {
      // Match Node's error code
      error.code = 'ERR_MODULE_NOT_FOUND';
    }
    throw error;
  }

  return next(url, context);
}
```
With the loader, the above failed commands work well:
```
% node --loader=./loader.js test/basic-fixtures/index
(node:56149) ExperimentalWarning: Custom ESM Loaders is an experimental feature. This feature could change at any time
(Use `node --trace-warnings ...` to show where the warning was created)
hello from file.js
hello from folder/index.js
% node --loader=./loader.js test/basic-fixtures/index.js
(node:56160) ExperimentalWarning: Custom ESM Loaders is an experimental feature. This feature could change at any time
(Use `node --trace-warnings ...` to show where the warning was created)
hello from file.js
hello from folder/index.js
```
With custom loaders, there is no need for the --experimental-specifier-resolution flag.
For the following two reasons, Node.js has dropped support for DTrace/SystemTap/ETW:
Node.js 19 has updated the V8 JavaScript engine to version 10.7, which includes Intl.NumberFormat v3, new functionality for language-sensitive number formatting.
Intl.NumberFormat(locales, options)
locales is an optional parameter: a BCP 47 language tag, or an array of such tags.
options is also an optional parameter, which is an object with some or all of these properties: compactDisplay, currency, currencyDisplay, currencySign, localeMatcher, notation, numberingSystem, signDisplay, style, unit, unitDisplay, useGrouping, roundingMode, roundingPriority, roundingIncrement, trailingZeroDisplay, minimumIntegerDigits, minimumFractionDigits, maximumFractionDigits, minimumSignificantDigits, and maximumSignificantDigits.
Among them, style chooses the formatting style, with the following supported values:
If style is set to 'currency', the currency property is required. currency takes an ISO 4217 currency code. By default, minimumFractionDigits and maximumFractionDigits follow the currency's ISO 4217 minor-unit count, which is 2 for most currencies (but, for example, 0 for JPY).
Let’s take a look at this example:
```
const number = 123456.789;

console.log(new Intl.NumberFormat('de-DE', { style: 'currency', currency: 'EUR' }).format(number));
console.log(new Intl.NumberFormat('ja-JP', { style: 'currency', currency: 'JPY' }).format(number));
console.log(new Intl.NumberFormat('ar-SA', { style: 'currency', currency: 'EGP' }).format(number));
console.log(new Intl.NumberFormat('zh-CN', { style: 'currency', currency: 'CNY' }).format(number));
```
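The other style values work the same way; a quick sketch (the en-US locale and the sample values here are illustrative, not from the original post):

```javascript
const n = 123456.789;

// 'decimal' is the default plain-number style
console.log(new Intl.NumberFormat('en-US', { style: 'decimal' }).format(n));
// prints: 123,456.789

// 'percent' multiplies by 100 and appends a percent sign
console.log(new Intl.NumberFormat('en-US', { style: 'percent' }).format(0.25));
// prints: 25%

// 'unit' requires a unit property, analogous to how 'currency' requires currency
console.log(new Intl.NumberFormat('en-US', { style: 'unit', unit: 'kilometer-per-hour' }).format(n));
```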
For more information, or to develop web applications using Node.js, hire a Node developer from us, as we give you a high-quality product by utilizing the latest tools and advanced technology. E-mail us anytime at hello@hkinfosoft.com or Skype us: "hkinfosoft".
To develop your custom web app using Node.js, please visit our technology page.
Content Source:
.NET 6 is the current LTS (Long Term Support) release and will be supported until November 12, 2024.
This API will manage movie records stored in a relational database (SQL Server) as described in the table below:
The sections of this post will be as follows:
You will need the following tools installed on your computer:
RESTful APIs conform to the REST architectural style.
REST, or REpresentational State Transfer, is an architectural style for providing standards between computer systems on the web, making it easier for systems to communicate with each other.
REST relies on a client-server relationship. This essentially means that the client application and the server application must be able to evolve separately, without any dependency on each other.
REST is stateless: the communication between the client and the server always contains all the information needed to perform the request. There is no session state on the server; it is kept entirely on the client side.
REST provides a uniform interface between components. Resources expose directory structure-like URIs.
REST is not strictly tied to HTTP, but it is most commonly associated with it. There are four basic HTTP verbs we use in requests to interact with resources in a REST system: GET, POST, PUT, and DELETE.
In a REST system, representations transfer JSON or XML to represent data objects and attributes.
REST has had such a large impact on the Web that it has mostly displaced SOAP-based interface design because it’s a considerably simpler style to use.
Open Visual Studio 2022 and select Create a new project and then select ASP.NET Core Web API:
Give your project a name on the following screen and then click Next.
In the next screen, select .NET 6.0 as the framework and click Create:
At this point you have a starter project as follows:
In the Program.cs you can see that Swagger support is added automatically to your project:
And also Swashbuckle.AspNetCore NuGet package is added as a dependency.
Now, let's run (Ctrl+F5) the project to see the default output. When the browser opens and the Swagger UI is shown, select the GET method in the WeatherForecast part, then select Try It Out and Execute:
You can also copy the request URL shown in the Swagger UI for this method and see its result directly in the browser:
When you run the application, the default URL comes from the launchSettings.json:
And the result values come from the GET method of the WeatherForecastController:
As you see, values here are hard coded and randomness is added to generate different values.
In your Web API, you will create your own records in an SQL server database and will be able to view, update and delete them through REST API endpoints.
Now, you will implement your data model class.
In Solution Explorer, right-click the project. Select Add -> New Folder and name the folder Models.
Then right-click the Models folder and select Add->Class. Name the class Movie.cs and click Add.
Next, add the following properties to the class:
The Id field is required by the database for the primary key.
You will use your model with Entity Framework Core (EF Core) to work with a database.
EF Core is an object-relational mapping (ORM) framework that simplifies the data access code. Model classes don’t have any dependency on EF Core. They just define the properties of the data that will be stored in the database.
In this post, you will write the model classes first and EF Core will create the database. This is called Code First Approach.
Let’s add the EF Core NuGet packages to the project. Right-click on the project and select Manage NuGet Packages… and then install the following packages:
The database context is the main class that coordinates Entity Framework functionality for a data model. This class is created by deriving from Microsoft.EntityFrameworkCore.DbContext class.
Now, right-click the Models folder and select Add ->Class. Name the class MovieContext and click Add. Then add the following code to the class:
The preceding code creates a DbSet<Movie> property for the entity set.
In Entity Framework terminology, an entity set typically corresponds to a database table and an entity corresponds to a row in the table.
The name of the connection string is passed into the context by calling a method on a DbContextOptions object. For local development, the ASP.NET Core configuration system reads the connection string from the appsettings.json file.
We need to add our connection string to the appsettings.json. You will use the local SQL server instance in your machine and you can define the connection string as follows:
You can change the database name if you want.
ASP.NET Core is built with Dependency Injection (DI). Services (such as the EF Core DB context) are registered with DI during application startup. Components that require these services are provided with these services via constructor parameters.
Now, you will register your database context to the built-in IOC container. Add the following code to Program.cs:
Now, you will create the database using the EF Core Migrations feature.
Migrations lets us create a database that matches our data model and update the database schema when our data model changes.
First, you will add an initial Migration.
Open Tools -> NuGet Package Manager -> Package Manager Console (PMC) and run the following command in the PMC:
Add-Migration Initial
The Add-Migration command generates code to create the initial database schema which is based on the model specified in the MovieContext class. The Initial argument is the migration name and any name can be used.
After running the command, a migration file is created under the Migrations folder:
As the next step, run the following command in the PMC:
Update-Database
The Update-Database command runs the Up method in the Migrations/{time-stamp}_Initial.cs file, which creates the database.
Now, you will check the database created. Open View -> SQL Server Object Explorer.
You will see the newly created database as below:
As you see, the Movie table and the Migrations History table are created automatically. Then a record is inserted into the migration history table to show the executed migrations on the database.
In this section, you will create the Movies API Controller and add the methods to it, and also will test those methods.
Let’s add the controller first. Right-click on the Controller folder and select Add -> Controller.. and then select API Controller – Empty as below:
Click Add and give a name to your controller on the next screen.
MoviesController is created as below:
As you see, the class is decorated with the [ApiController] attribute. This attribute indicates that the controller responds to web API requests.
MoviesController class inherits from ControllerBase.
Next, we will inject the database context mentioned in the previous section through the constructor of the controller. Add the following code:
Now, you will add CRUD (create, read, update, and delete) action methods to the controller. Let’s start with the GET methods.
Add the following code to the MoviesController:
The GetMovies method returns all the movies, and GetMovie(int id) returns the movie with the given Id. They are decorated with the [HttpGet] attribute, which denotes that a method responds to an HTTP GET request.
These methods implement two GET endpoints:
You can test the app by calling the two endpoints from a browser as follows:
The return type of the GetMovie methods is ActionResult<T> type. ASP.NET Core automatically serializes the object to JSON and writes the JSON into the body of the response message. The response code for this return type is 200, assuming there are no unhandled exceptions. Unhandled exceptions are translated into 5xx errors.
Routing and URL Paths
The URL path for each method is constructed as follows:
Start with the template string in the controller's Route attribute (Route("api/[controller]")). Then replace [controller] with the name of the controller, which by convention is the controller class name minus the Controller suffix. For this sample, the controller class name is MoviesController, so the controller name is movies.
ASP.NET Core routing is case insensitive.
Testing the GetMovie Method
Now you will test these endpoints. Before that, let’s insert some movie records into your table.
Go to the SQL Server Object Explorer and right-click the Movies table and select View Data:
Then add some movie records manually to the table:
You do not need to add data for the Id column as SQL Server automatically handles this for us.
Now, you can test the GET endpoints. Start (Ctrl+F5) the application:
Select the first GET method and click Try it out -> Execute:
This shows all of the movies in the application.
Next, click the second GET method and click Try it out and enter one of the Ids above in the id field and click Execute:
If no item matches the requested Id, the method returns a 404 NotFound error code.
Add the following code to the MoviesController:
The PostMovie method creates a movie record in the database. The preceding code is an HTTP POST handler, as indicated by the [HttpPost] attribute. The method gets the value of the movie record from the body of the HTTP request.
The CreatedAtAction method:
Testing the PostMovie Method
Start the application and then select the POST method in the Movies section.
Click Try it out and enter the movie information that you want to add in the request body:
and click Execute.
Response status code is 201 (Created) and a location header is added to the response as seen below:
You can paste this location URL in the browser and see the response there too:
Also, you can check this record from the Movies table in your local database:
Add the following code to the MoviesController:
The PutMovie method updates the movie record with the given Id in the database. The preceding code is an HTTP PUT handler, as indicated by the [HttpPut] attribute. The method gets the value of the movie record from the body of the HTTP request. You need to supply the Id both in the request URL and in the body, and they have to match. According to the HTTP specification, a PUT request requires the client to send the entire updated entity, not just the changes.
The response is 204 (No Content) if the operation is successful.
Testing the PutMovie Method
Start the application and then select the PUT method in the Movies section.
Click Try it out and enter the movie information that you want to update in the request body and the Id of the movie in the id field:
and then click Execute.
We can check the updated state of the movie from GET method with Id in the Swagger UI or directly from the browser as below:
We can see the updated info in the database as well:
If you try to update a record that does not exist in the database, you get a 404 Not Found error:
Add the following code to the MoviesController:
The DeleteMovie method deletes the movie record with the given Id from the database. The preceding code is an HTTP DELETE handler, as indicated by the [HttpDelete] attribute. This method expects the Id in the URL to identify the movie record we want to delete.
Testing the DeleteMovie Method
Start the application and then select the DELETE method in the Movies section.
Click Try it out and enter the Id of the movie you want to delete in the id field:
and then click Execute.
We do not need to supply a request body as you might have noticed. The response status is 204 No Content.
If you try to get this movie record using the browser, you get a 404 Not Found error, as expected:
You can check as well from the database that the record is deleted:
Cron jobs in Node.js are used to schedule scripts to run periodically. They're most commonly used to automate tasks such as scraping data from another website or deleting stale records from a database. There are also many other situations where a web application needs specific tasks to run periodically.
In this article, the points to cover are:
There are three popular modules, sorted here by GitHub stars.
However, cron currently has more downloads than the other modules.
All three modules do a similar thing, but the cron module is preferred because more examples are available in its repository.
There are multiple tutorials for cron job patterns.
Note that the cron job modules take six values, while the crontab website shows five, because the first value (seconds) is optional.
The cron format consists of:
```
* * * * * *
┬ ┬ ┬ ┬ ┬ ┬
│ │ │ │ │ └── day of week (0 - 7) (0 or 7 is Sun)
│ │ │ │ └──── month (1 - 12)
│ │ │ └────── day of month (1 - 31)
│ │ └──────── hour (0 - 23)
│ └────────── minute (0 - 59)
└──────────── second (0 - 59, OPTIONAL)
```
1- At 04:05.
2- At every 10th minute.
3- Every odd minute.
4- At every minute from 10 through 20 on Sunday and Monday.
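As a sketch, here are six-field patterns (leading seconds included, as the cron module expects) matching the four descriptions above; the patterns are illustrative, not taken from the original post:

```javascript
// Field order: second minute hour day-of-month month day-of-week
const patterns = [
  { description: 'At 04:05', pattern: '0 5 4 * * *' },
  { description: 'At every 10th minute', pattern: '0 */10 * * * *' },
  { description: 'Every odd minute', pattern: '0 1-59/2 * * * *' },
  { description: 'At every minute from 10 through 20 on Sunday and Monday', pattern: '0 10-20 * * * 0-1' },
];

for (const { description, pattern } of patterns) {
  console.log(`${pattern.padEnd(18)} ${description}`);
}
```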
You know that Node.js is single-threaded, so to use clustering in production and run a Node.js project on multiple CPUs, you can use PM2.
If you use a cron job in the Nest.js framework or in Express.js together with clustering, the cron job starts on multiple CPUs. Let's review an example that shows the problem.
Cronjob In Cluster Mode
>>> Problem
npm install cron
Run the project in cluster mode with PM2; cluster mode allows networked Node.js applications (HTTP(S)/TCP/UDP servers) to be scaled across all available CPUs:
pm2 start index.js -i max
pm2 logs
As you can see, after 20 seconds the cron job has executed the desired function 8 times instead of 2, because the project is clustered across 4 CPUs.
>>> Solution
To solve this problem, use the NODE_APP_INSTANCE environment variable to run the cron job on only one process.
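A minimal sketch of that guard (PM2 sets NODE_APP_INSTANCE per cluster worker; setInterval stands in for the cron package here so the sketch stays dependency-free):

```javascript
// Only cluster instance 0 schedules the task; the other workers skip it.
// In a plain (non-PM2) process NODE_APP_INSTANCE is undefined, so it runs.
const instance = process.env.NODE_APP_INSTANCE ?? '0';
const isScheduler = instance === '0';

if (isScheduler) {
  // A real app would create a CronJob from the cron package here instead.
  setInterval(() => {
    console.log(`task executed on pid ${process.pid}`);
  }, 10_000).unref(); // unref() lets this demo exit; a long-lived worker would omit it
} else {
  console.log(`instance ${instance}: not scheduling`);
}
```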
The result
As you can see, the cron job is running on just one process.
If you want to run in different environments, with and without cluster mode, you can handle this with the NODE_ENV variable:
NODE_ENV=local node index
With these conditions, the cron job runs in the local environment and runs in the prod environment with cluster mode.
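One hedged way to express those conditions (the variable names and defaults are illustrative):

```javascript
// Run the job unconditionally in the local environment (single process),
// and only on cluster instance 0 in prod (PM2 cluster mode).
const env = process.env.NODE_ENV || 'prod';
const instance = process.env.NODE_APP_INSTANCE || '0';
const shouldRun = env === 'local' || instance === '0';

console.log(`env=${env} instance=${instance} shouldRun=${shouldRun}`);
```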
When you start a cron job on a server, it runs based on the server's time and timezone. The best approach is to use the 'Etc/UTC' timezone instead of the server's local timezone.
>>> Problem
Imagine a website that publishes new data every day at 8:00 AM. Your script is running on the server; the problem is that some timezones change their clocks (daylight saving time), so when the server's time changes, the execution time of your script is also affected.
>>> Solution
Schedule the job against UTC and derive the local time from the UTC time.
It is no news that Microsoft is working very hard on improving and bringing new features to .NET and to its C# programming language. This time Microsoft is targeting web development and focusing on ASP.NET Core, which will arrive hand in hand with .NET 7.
Some time ago Microsoft released Preview 1 of ASP.NET Core in .NET 7, and the number of new features is great, so let's look at them!
The first of the new features brings improvements to minimal APIs, especially to IFormFile and IFormCollection. With this improvement you will be able to use IFormFile and IFormCollection to handle and manage file uploads much more easily.
Microsoft warns that if you want to use these new functions with authentication, anti-forgery is required, but so far Microsoft has not implemented such support. However, they reassure us that it is on the roadmap of .NET 7.
Support for such requests with client certificates or cookie headers is currently inactive. Let’s take a look at the example provided by Microsoft to see this new ASP.NET Core feature in action:
The next new enhancement for the minimal APIs in ASP.NET Core comes for Stream and PipeReader.
To understand the kind of scenario in which these new bindings would be used, imagine that we need to store data in blob storage or queue it in some queue provider like Azure. In this case we can use Stream and PipeReader to bind the body of a request and later process it in the cloud.
However, Microsoft warns us of three details to bear in mind for these minimal APIs to function correctly:
The last improvement in the minimal APIs that Microsoft brings in this Preview 1 is about JSON configuration. Through ConfigureRouteHandlerJsonOptions we will be able to manually configure the options and settings of the minimal API endpoints using JSON.
This improvement has been introduced mainly, as Microsoft says, to avoid confusion with Microsoft.AspNetCore.Mvc.JsonOptions.
This is the next new feature that ASP.NET Core will bring in .NET 7. This new source code generator for SignalR introduced by Microsoft has the ability to generate code (both sending and receiving) strongly typed based on developer-defined interfaces.
This especially applies to SignalR hub interfaces: instead of using loosely-typed methods, we now have the option to reuse the same interfaces in the client. A client can implement an interface containing the hub's methods and, at the same time, take advantage of that interface to call any method on the hub.
The good thing is that Microsoft has let us see the use and operation of this new SignalR generator. Let’s see how Microsoft uses it:
The next step is to add a static partial class and write the static partial methods, decorated with the HubServerProxy and HubClientProxy attributes. Finally, we would use the partial methods, and that's it: that's how easy it is to use the new SignalR client source generator.
We continue with the next improvement brought by Microsoft. This time they have focused on improving the developer experience of null-state checking: they have implemented a nullable view for the null-state checks of any ASP.NET Core application. This is the example that Microsoft provides us:
Unfortunately there are no further examples or use cases for this new feature. Hopefully Microsoft will continue to release new features and talk more about the ones already revealed.
Thanks to this new ASP.NET Core feature, you will be able to manually configure validation to use JSON property names with SystemTextJsonValidationMetadataProvider.
Previously, since the names of a model's properties were commonly implementation details, managing them from a single-page application was difficult.
If you want to know more about this feature, I recommend you, as always, to consult the original source: Use JSON property names in validation errors
Injecting services into Blazor validation attributes? Yes: thanks to this improvement you will be able to inject almost any type of service into custom validation attributes. To make this possible, Blazor will configure the ValidationContext so that it can be used as a service provider.
© 2024 — HK Infosoft. All Rights Reserved.