Latest updates and features in .NET 7
.NET 7 was released on November 8, 2022. It is the successor to .NET 6, focuses on being unified, modern, simple, and fast, and will be supported for 18 months as a standard-term support (STS) release. It includes a number of new features and improvements, including:
- C# 11: .NET 7 ships with C# 11, which adds features such as raw string literals, required members, list patterns, and generic math support.
- Performance improvements: .NET 7 includes a number of performance improvements, such as faster startup times and lower memory usage.
- New features for cloud-native development: .NET 7 includes a number of new features for cloud-native development, such as support for HTTP/3 and improvements to minimal APIs.
- New features for desktop development: .NET 7 includes a number of new features for desktop development, such as support for WinUI 3.1 and improvements to Windows Presentation Foundation (WPF).
- New features for mobile development: .NET 7 includes a number of new features for mobile development, such as .NET MAUI support and an upgrade path from Xamarin.Forms.
C# 11 features in .NET 7
The C# 11 compiler ships with the .NET 7 SDK. Notable language features you can use when targeting .NET 7, some new in C# 11 and some carried forward from recent C# releases, include:
- Global using directives (C# 10): declare a using directive once for the whole project so you don’t have to repeat it in every file.
- File-scoped namespaces (C# 10): declare the namespace for an entire file with a single namespace declaration line instead of wrapping the whole file in braces.
- Record structs (C# 10): value types designed for storing data, with compiler-generated value equality, ToString, and with-expression support.
- Init-only setters (C# 9): properties that can be set only during object initialization and are immutable afterwards.
- Required members (C# 11): properties and fields marked required must be initialized when the object is constructed, and the compiler enforces this at every call site.
- Asynchronous streams (C# 8): produce and consume sequences of values asynchronously with await foreach, without blocking a thread.
- Generic math (C# 11): static abstract interface members and the new numeric interfaces in .NET 7 make it possible to write math code that works across numeric types.
- List patterns (C# 11): pattern matching now works on the shape of lists and arrays, making it more powerful and easier to use (see the sketch after this list).
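To make this concrete, here is a small, self-contained sketch (our own illustration, not taken from the release notes) combining required members and list patterns; it compiles with the .NET 7 SDK:

// Program.cs (top-level statements, C# 11)
var person = new Person { Name = "Ada" };        // omitting Name would be a compile-time error
Console.WriteLine(Describe(new[] { 1, 2, 3 }));  // prints "starts with 1, ends with 3"

// List pattern: match on the shape of an array.
static string Describe(int[] values) => values switch
{
    [] => "empty",
    [var only] => $"one element: {only}",
    [var first, .., var last] => $"starts with {first}, ends with {last}",
    _ => "unexpected",
};

// Required member: callers must set Name when constructing a Person.
public class Person
{
    public required string Name { get; init; }
}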
Performance improvements in .NET 7
.NET 7 includes a number of performance improvements, such as:
- Faster startup times: .NET 7 starts up faster than previous versions of .NET, thanks to improvements such as more ReadyToRun precompilation of the runtime and JIT enhancements like on-stack replacement.
- Lower memory usage: .NET 7 uses less memory than previous versions, due to improvements such as a smaller runtime footprint and garbage collector refinements.
- Improved performance of regular expressions: the regex engine has been substantially rewritten, and a new source generator can emit optimized matching code at compile time, resulting in significant gains for many patterns (see the sketch after this list).
- Improved performance of ASP.NET Core: ASP.NET Core in .NET 7 has a number of performance improvements, such as faster routing and the new output-caching middleware.
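One of the headline performance features is the new regular expression source generator. Here is a minimal sketch (our own example) of how it is used:

using System.Text.RegularExpressions;

public static partial class Validators
{
    // The source generator emits an optimized matcher at build time,
    // so no pattern parsing happens at run time.
    [GeneratedRegex("^[a-z0-9-]+$", RegexOptions.IgnoreCase)]
    public static partial Regex Slug();
}

// Usage: bool ok = Validators.Slug().IsMatch("my-article-42");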
New features for cloud-native development in .NET 7
.NET 7 includes a number of new features for cloud-native development, such as:
- Support for HTTP/3: .NET 7 supports HTTP/3, the latest version of the HTTP protocol. HTTP/3 runs over QUIC, which speeds up connection establishment and avoids head-of-line blocking on lossy networks.
- Improvements to minimal APIs: minimal APIs in .NET 7 have been improved, making them easier to use and more powerful. For example, minimal APIs now support file uploads via IFormFile and can use the new rate-limiting middleware (see the sketch after this list).
- Improvements to container support: .NET 7 includes a number of improvements to container support, such as the ability to publish directly to containers and to use central package management with NuGet.
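As a rough illustration of rate limiting with a minimal API, here is a sketch based on the Microsoft.AspNetCore.RateLimiting middleware (option names may vary slightly across versions):

using System.Threading.RateLimiting;
using Microsoft.AspNetCore.RateLimiting;

var builder = WebApplication.CreateBuilder(args);

// Allow at most 10 requests per 10-second window for the "fixed" policy.
builder.Services.AddRateLimiter(options =>
    options.AddFixedWindowLimiter("fixed", limiter =>
    {
        limiter.PermitLimit = 10;
        limiter.Window = TimeSpan.FromSeconds(10);
    }));

var app = builder.Build();

app.UseRateLimiter();

app.MapGet("/hello", () => new { Message = "Hello, world!" })
   .RequireRateLimiting("fixed");

app.Run();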
New features for desktop development in .NET 7
.NET 7 includes a number of new features for desktop development, such as:
- Support for WinUI 3.1: .NET 7 supports WinUI 3.1, the latest version of the Windows UI platform. WinUI 3.1 includes a number of new features, such as support for Fluent Design and XAML islands.
- Improvements to Windows Presentation Foundation (WPF): WPF in .NET 7 has a number of improvements, such as support for dark mode and variable fonts.
New features for mobile development in .NET 7
.NET 7 includes a number of new features for mobile development, such as:
- Support for .NET MAUI: .NET MAUI is a cross-platform UI framework that allows you to build native mobile and desktop applications from a single codebase. It reached general availability alongside .NET 6 and ships with updated tooling and libraries in .NET 7 (see the sketch after this list).
- Upgrade path from Xamarin.Forms: .NET MAUI is the evolution of Xamarin.Forms, and tooling is available to help migrate existing Xamarin.Forms projects to .NET MAUI.
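For a taste of what .NET MAUI code looks like, here is a small counter page written in C# (a minimal sketch that assumes a standard project created with dotnet new maui):

using Microsoft.Maui.Controls;

public class CounterPage : ContentPage
{
    int count;

    public CounterPage()
    {
        var label = new Label { Text = "Taps: 0", FontSize = 24 };
        var button = new Button { Text = "Tap me" };

        // Update the label every time the button is tapped.
        button.Clicked += (_, _) =>
        {
            count++;
            label.Text = $"Taps: {count}";
        };

        Content = new VerticalStackLayout
        {
            Padding = 24,
            Spacing = 12,
            Children = { label, button },
        };
    }
}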
For more information, please head over to our Hire .NET Developer page and to develop a website using ASP.NET, Hire .NET Developer at HK Infosoft – we are destined to provide you with an innovative solution using the latest technology stacks. E-mail us anytime at – hello@hkinfosoft.com or Skype us: “hkinfosoft”.
Content Source:
- DotNet Official Documentation
No-code App Development Using FlutterFlow
Are you looking to revolutionize your app development workflow?
Look no further than FlutterFlow – an innovative visual development platform that’s taking app development to the next level with its intuitive Drag & Drop functionality. In this blog post, we’ll dive into what FlutterFlow is, its key features, and how you can use it to rapidly bring your app ideas to life.
What is FlutterFlow?
At its core, FlutterFlow is a powerful visual development platform designed to simplify and accelerate the process of building mobile and web applications. It’s built on top of Google’s Flutter framework, which is renowned for its ability to create high-quality, natively compiled applications for mobile, web, and desktop from a single codebase.
FlutterFlow takes the power of Flutter and adds an intuitive drag-and-drop interface, making it accessible to developers of all skill levels. Whether you’re a seasoned developer or just starting out, you can leverage FlutterFlow to design, develop, and deploy apps without the need for extensive coding knowledge.
Key Features of FlutterFlow
1. Visual Interface: FlutterFlow offers a visual interface that lets you design your app’s user interface by simply dragging and dropping elements onto the canvas. This speeds up the design process and ensures a pixel-perfect result.
2. Widgets Gallery: The platform provides a wide range of pre-built widgets that you can customize to match your app’s design. From buttons and text fields to complex interactive components, FlutterFlow has you covered.
3. Responsive Design: Building apps that look great on various devices is essential. FlutterFlow enables you to create responsive layouts that adapt seamlessly to different screen sizes and orientations.
4. Data Binding: Connect your app’s UI to data sources easily. FlutterFlow allows you to bind data from APIs or databases to your UI components, keeping your app’s content up-to-date.
5. Collaboration: Collaborate with team members in real-time. Whether you’re a solo developer or part of a team, FlutterFlow supports seamless collaboration, enhancing productivity.
Getting Started with FlutterFlow
1. Sign Up: To get started, sign up for a FlutterFlow account. You can choose from different pricing plans based on your needs.
2. Create a New Project: Once you’re in, create a new project and choose whether you’re building a mobile or web app.
3. Design Your App: Use the visual editor to design your app’s UI. Drag and drop widgets onto the canvas, arrange them, and customize their properties.
4. Add Functionality: Use the built-in logic builder to add functionality to your app. Define interactions, create conditional statements, and more – all without writing extensive code.
5. Preview and Test: Before deploying your app, use the preview feature to test how it looks and functions. Make adjustments as needed to ensure a seamless user experience.
6. Deploy Your App: Once you’re satisfied with your app, deploy it with just a few clicks. FlutterFlow takes care of the technical details, allowing you to focus on delivering an exceptional app.
Conclusion
In the fast-paced world of app development, tools like FlutterFlow are a game-changer. They empower developers to create robust and visually stunning applications without the overhead of traditional hand-coding. By harnessing the power of Google’s Flutter framework and adding a user-friendly visual interface, FlutterFlow is designed to redefine how we approach app development.
So whether you’re a startup founder looking to build a prototype, a developer aiming to speed up your workflow, or someone passionate about turning your app ideas into reality, FlutterFlow has everything you need to succeed.
For more information, please head over to our Hire Flutter Developer page and to develop a custom mobile application using Flutter, Hire Flutter Developer at HK Infosoft – we are destined to provide you with an innovative solution using the latest technology stacks. E-mail us anytime at – hello@hkinfosoft.com or Skype us: “hkinfosoft”.
Content Source:
- flutterflow.io
What is Fastify?
In the rapidly evolving world of web development, developers are constantly on the lookout for frameworks that can provide both speed and efficiency. Enter Fastify, a lightweight and lightning-fast web framework for Node.js that has taken the development community by storm. If you’re a developer looking to create high-performance, scalable, and secure web applications, Fastify may be the game-changer you’ve been waiting for.
What is Fastify?
Fastify, developed by Matteo Collina and Tomas Della Vedova, is an open-source web framework for Node.js designed with a primary focus on speed and low overhead. Launched in 2016, Fastify has quickly gained popularity in the Node.js ecosystem due to its impressive performance, simplicity, and extensibility. It is built on top of Node.js’s HTTP module and takes full advantage of the latest JavaScript features to maximize its speed and efficiency.
Getting started with Fastify:
- npm init
- npm i fastify
- Fastify requires Node.js version 14 or later.
- In Express you return JSON data with res.json({ hello: "world" }); in Fastify you can simply return the object from the route handler and it is serialized for you.
// Require the framework and instantiate it
const fastify = require("fastify")({ logger: true });

// Declare a route
fastify.get("/", async (request, reply) => {
  return { hello: "world" };
});

// Start the server
fastify.listen(3000);
Fastify comes with an amazing set of features that will give your project a boost:
The Need for Speed
One of the primary reasons developers are flocking to Fastify is its exceptional performance. Thanks to its powerful and highly optimized core, Fastify boasts some of the fastest request/response times among Node.js frameworks. It leverages features like request validation, which is automatically generated from JSON schemas, to ensure that data is processed swiftly and accurately. Additionally, Fastify supports asynchronous programming and handles requests concurrently, making it ideal for handling heavy workloads and high traffic.
Minimalism and Extensibility
Fastify follows a minimalist approach, focusing on providing only the essential components needed to build web applications efficiently. Developers can opt-in to use various plugins to extend Fastify’s functionality as per their requirements. This approach not only keeps the core lightweight but also gives developers the flexibility to customize their stack with the specific tools they need. Furthermore, the ecosystem around Fastify is growing rapidly, with a wide array of plugins and middleware available, making it easy to integrate third-party tools seamlessly.
Developer-Friendly API
Fastify’s API is designed to be intuitive and easy to use, reducing the learning curve for developers. Its well-documented and expressive API allows developers to write clean, maintainable, and organized code. The framework’s emphasis on proper error handling and logging also contributes to its ease of use, helping developers quickly identify and rectify issues during development and production.
JSON Schema-Based Validation
Data validation is a crucial aspect of web application development to ensure data integrity and security. Fastify utilizes JSON Schema for data validation, enabling developers to define the expected shape of incoming requests and responses. This not only simplifies the validation process but also automatically generates detailed and helpful error messages, making debugging a breeze.
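Here is a small sketch (our own example) of attaching a JSON schema to a route. Fastify compiles the schema into a fast validator and rejects non-conforming payloads with a descriptive 400 response:

const fastify = require("fastify")({ logger: true });

// Describe the expected request body and the successful response shape.
const createUserSchema = {
  body: {
    type: "object",
    required: ["name", "email"],
    properties: {
      name: { type: "string", minLength: 1 },
      email: { type: "string" },
    },
  },
  response: {
    201: {
      type: "object",
      properties: { id: { type: "number" }, name: { type: "string" } },
    },
  },
};

fastify.post("/users", { schema: createUserSchema }, async (request, reply) => {
  const { name } = request.body;
  reply.code(201);
  return { id: 1, name };
});

fastify.listen({ port: 3000 });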
Enhanced Security
Fastify is designed with security in mind. It encourages best practices such as using up-to-date cryptographic libraries and secure authentication mechanisms. Additionally, the Fastify ecosystem provides official plugins such as @fastify/helmet and @fastify/csrf-protection to guard against common web application attacks like Cross-Site Scripting (XSS) and Cross-Site Request Forgery (CSRF). With Fastify, developers can rest assured that their applications are less prone to security vulnerabilities.
Conclusion
Fastify’s emergence as a top-tier web framework for Node.js is no coincidence. Its commitment to speed, minimalism, and extensibility sets it apart from the competition. Whether you’re building a small-scale API or a large-scale application, Fastify’s performance, easy-to-use API, and emphasis on security make it an excellent choice.
In the fast-paced world of web development, having a framework that can boost productivity and deliver top-notch performance is essential. Fastify has proven itself as a reliable and efficient framework, providing developers with the tools they need to create high-performance applications without compromising on code quality and security.
So, if you’re ready to take your Node.js projects to the next level, give Fastify a try, and experience the speed and power it brings to your development workflow.
For more information and to develop web applications using Node.js, Hire Node.js Developer from us as we give you a high-quality product by utilizing all the latest tools and advanced technology. E-mail us anytime at – hello@hkinfosoft.com or Skype us: “hkinfosoft”.
To develop custom web apps using Node.js, please visit our technology page.
Content Source:
- fastify.dev
What is new in Laravel 10?
Laravel’s move to a yearly major release cycle is intended to ease the maintenance burden on the community and challenge the development team to ship amazing, powerful new features without introducing breaking changes. As a result, a variety of robust features were already shipped in Laravel 9 without breaking backwards compatibility.
This commitment to ship great new features during the current release means that future “major” releases will likely be used primarily for “maintenance” tasks such as upgrading upstream dependencies, which can be seen in these release notes.
Laravel 10 continues the improvements made in Laravel 9.x by introducing argument and return types to all application skeleton methods, as well as all stub files used to generate classes throughout the framework. In addition, a new, developer-friendly abstraction layer has been introduced for starting and interacting with external processes.
PHP 8.1:
PHP 8.1 is the minimum-required PHP version in Laravel 10. Some PHP 8.1 features, such as readonly properties and array_is_list, are used in Laravel 10.
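For reference, here is a quick sketch of the two PHP 8.1 features mentioned above:

<?php

// readonly properties can be written only once, in the constructor.
class Invoice
{
    public function __construct(
        public readonly string $number,
        public readonly float $amount,
    ) {}
}

$invoice = new Invoice('INV-001', 99.50);
// $invoice->amount = 10; // Error: cannot modify a readonly property

// array_is_list() checks for sequential integer keys starting at 0.
var_dump(array_is_list(['a', 'b', 'c'])); // bool(true)
var_dump(array_is_list([1 => 'a']));      // bool(false)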
Laravel Official Packages Upgrade
Not only is the framework professionally maintained and updated on a regular basis, but so are all of the official packages and the ecosystem.
The following is a list of the most recent official Laravel packages that have been updated to support Laravel 10:
- Breeze
- Cashier Stripe
- Dusk
- Horizon
- Installer
- Jetstream
- Passport
- Pint
- Sail
- Scout
- Valet
Predis Version Upgrade
Predis is a robust Redis client for PHP that may help you get the most out of caching to provide a fantastic user experience. Laravel formerly supported both versions 1 and 2, but as of Laravel 10, the framework no longer supports Predis 1.
Although Laravel documentation mentions Predis as the package for interacting with Redis, you may also use the official PHP extension. This extension provides an API for communicating with Redis servers.
All Validation Rules Invokable by Default
If you were to make an invokable validation rule in Laravel 9, you would need to add an --invokable flag after the Artisan command. This is no longer necessary because all Laravel 10 rules are invokable by default. So, you may run the following command to create a new invokable rule in Laravel 10:
php artisan make:rule CustomRule
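The generated class is already invokable. In Laravel 10 it looks roughly like this (a sketch, with an example uppercase check filled in):

<?php

namespace App\Rules;

use Closure;
use Illuminate\Contracts\Validation\ValidationRule;

class CustomRule implements ValidationRule
{
    /**
     * Run the validation rule.
     */
    public function validate(string $attribute, mixed $value, Closure $fail): void
    {
        if (strtoupper($value) !== $value) {
            $fail("The {$attribute} must be uppercase.");
        }
    }
}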
Types
On its initial release, Laravel utilized all of the type-hinting features available in PHP at the time. However, many new features have been added to PHP in the subsequent years, including additional primitive type-hints, return types, and union types.
Laravel 10.x thoroughly updates the application skeleton and all stubs utilized by the framework to introduce argument and return types to all method signatures. In addition, extraneous “doc block” type-hint information has been deleted.
This change is entirely backwards compatible with existing applications. Therefore, existing applications that do not have these type-hints will continue to function normally.
Test Profiling
The Artisan test command has received a new --profile option that allows you to easily identify the slowest tests in your application:
php artisan test --profile
Generator CLI Prompts
To improve the framework’s developer experience, all of Laravel’s built-in make commands no longer require any input. If the commands are invoked without input, you will be prompted for the required arguments:
php artisan make:controller
String Password Helper Function
Laravel 10 can create a random and secure password with a given length:
$password = Str::password(12);
Laravel Pennant
A new first-party package, Laravel Pennant, has been released. Laravel Pennant offers a light-weight, streamlined approach to managing your application’s feature flags. Out of the box, Pennant includes an in-memory array driver and a database driver for persistent feature storage.
Features can be easily defined via the Feature::define method:
use Laravel\Pennant\Feature;
use Illuminate\Support\Lottery;

Feature::define('new-onboarding-flow', function () {
    return Lottery::odds(1, 10);
});
Once a feature has been defined, you may easily determine if the current user has access to the given feature:
if (Feature::active('new-onboarding-flow')) {
    // ...
}
Of course, for convenience, Blade directives are also available:
@feature('new-onboarding-flow')
    <div>
        <!-- ... -->
    </div>
@endfeature
Process Interaction
Laravel 10.x introduces a beautiful abstraction layer for starting and interacting with external processes via a new Process facade:
use Illuminate\Support\Facades\Process;

$result = Process::run('ls -la');

return $result->output();
Processes may even be started in pools, allowing for the convenient execution and management of concurrent processes:
use Illuminate\Process\Pool;
use Illuminate\Support\Facades\Process;

[$first, $second, $third] = Process::concurrently(function (Pool $pool) {
    $pool->command('cat first.txt');
    $pool->command('cat second.txt');
    $pool->command('cat third.txt');
});

return $first->output();
Horizon / Telescope Facelift
Horizon and Telescope have been updated with a fresh, modern look, including improved typography, spacing, and design.
Pest Scaffolding
Pest test scaffolding is now available when creating new Laravel projects. To opt in, use the --pest flag when building a new app with the Laravel installer:
laravel new example-application --pest
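With Pest scaffolding, a feature test reads like this (a sketch modelled on the default example test):

<?php

// tests/Feature/ExampleTest.php
it('returns a successful response', function () {
    $response = $this->get('/');

    $response->assertStatus(200);
});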
Conclusion
Laravel 10 is a significant release for the Laravel framework, and it comes with several new features and improvements that will help developers create more robust and efficient web applications. Native argument and return types across the skeleton and stubs, the new Process facade, Laravel Pennant feature flags, and test profiling make it easier to build well-structured, well-tested applications. Dropping PHP 8.0 support is also a significant decision that ensures developers are using a newer, more secure, and more efficient version of PHP. As always, Laravel continues to evolve and innovate, making it an excellent choice for web development projects.
For more information and to develop web applications using Laravel, Hire Laravel Developer from us as we give you a high-quality product by utilizing all the latest tools and advanced technology. E-mail us anytime at – hello@hkinfosoft.com or Skype us: “hkinfosoft”.
To develop custom web apps using Laravel, please visit our technology page.
Content Source:
- laravel.com
What is new in Node.js 20?
Unleashing the Power of Node 20: Embracing the Future of JavaScript Development
In the realm of server-side JavaScript, Node.js has become a dominant force, revolutionizing the way we build web applications. With each new version, Node.js brings forth exciting enhancements, improved performance, and expanded capabilities. In this blog, we’ll embark on a journey through the evolution of Node.js, exploring the advancements that have led to the highly anticipated Node 20. We’ll delve into the key features of Node 20 and showcase an example that demonstrates its potential.
From Past to Present: The Evolution of Node.js
Since its initial release in 2009, Node.js has evolved significantly, shaping the landscape of JavaScript development. The first versions of Node.js introduced a non-blocking, event-driven architecture, enabling developers to build highly scalable and efficient applications. With its growing popularity, Node.js gained a vibrant ecosystem of modules and libraries, making it a versatile platform for both back-end and full-stack development.
As Node.js progressed, new features were introduced to enhance performance, security, and developer productivity. For instance, Node.js 8 shipped as a Long-Term Support (LTS) release line, which provided stability and backward compatibility. Node.js 10 brought improvements in error handling and diagnostic reports, making it easier to identify and resolve issues. Node.js 12 introduced enhanced default heap limits and improved performance metrics.
Introducing Node 20: A Leap Forward in JavaScript Development
Now, let’s turn our attention to Node 20, the latest iteration of Node.js, and explore its groundbreaking features that are set to shape the future of JavaScript development.
- Improved Performance and Speed
- Enhanced Security
- Improved Debugging Capabilities
- ECMAScript Modules (ESM) Support
- Enhanced Worker Threads
- Stable Test Runner
- url.parse() Warns About URLs With Ports That Are Not Numbers
Let’s explore what they are and how to use them.
1. Improved Performance and Speed
Node.js 20 incorporates the latest advancements in the V8 JavaScript engine, resulting in significant performance improvements. Let’s take a look at an example:
// File: server.js
const http = require('http');

const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello, world!');
});

server.listen(3000, () => {
  console.log('Server running on port 3000');
});
By leveraging the performance optimizations in Node.js 20, applications like the one above experience reduced response times and enhanced scalability, resulting in an improved user experience.
2. Enhanced Security
Security is a top priority for any application, and Node.js 20 introduces several features to bolster its security. One noteworthy enhancement is the upgraded TLS implementation, ensuring secure communication between servers and clients. Here’s an example of using TLS in Node.js 20:
// File: server.js
const https = require('https');
const fs = require('fs');

const options = {
  key: fs.readFileSync('private.key'),
  cert: fs.readFileSync('certificate.crt')
};

const server = https.createServer(options, (req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Secure Hello, world!');
});

server.listen(3000, () => {
  console.log('Server running on port 3000');
});
With the upgraded TLS implementation, Node.js 20 ensures secure data transmission, safeguarding sensitive information.
3. Improved Debugging Capabilities
Node.js 20 introduces enhanced diagnostic and debugging capabilities, empowering developers to pinpoint and resolve issues more effectively. Consider the following example:
// File: server.js
const { performance, PerformanceObserver } = require('perf_hooks');

const obs = new PerformanceObserver((items) => {
  console.log(items.getEntries()[0].duration);
  performance.clearMarks();
});
obs.observe({ entryTypes: ['measure'] });

performance.mark('start');
// ... Code to be measured ...
performance.mark('end');

performance.measure('Duration', 'start', 'end');
In this example, the Performance API allows developers to measure the execution time of specific code sections, enabling efficient optimization and debugging.
4. ECMAScript Modules (ESM) Support
Node.js 20 embraces ECMAScript Modules (ESM), providing a standardized approach to organize and reuse JavaScript code. Let’s take a look at an example:
// File: module.js
export function greet(name) {
  return `Hello, ${name}!`;
}

// File: app.js
import { greet } from './module.js';

console.log(greet('John'));
With ESM support, developers can now leverage the benefits of code encapsulation and organization in Node.js, facilitating better code reuse and maintenance.
5. Enhanced Worker Threads
Node.js 20 introduces improved worker threads, enabling true multi-threading capabilities within a Node.js application. Consider the following example:
// File: worker.js
const { Worker, isMainThread, parentPort, workerData } = require('worker_threads');

if (isMainThread) {
  const worker = new Worker(__filename, { workerData: 'Hello, worker!' });
  worker.on('message', (message) => console.log(message));
} else {
  parentPort.postMessage(workerData);
}
In this example, the main thread creates a worker thread that receives data and sends a message back. With enhanced worker threads, Node.js 20 empowers developers to harness the full potential of multi-core processors, improving application performance.
6. Stable Test Runner
Node.js 20 includes an important change to the test_runner module. The module has been marked as stable after a recent update. Previously, the test_runner module was experimental, but this change marks it as a stable module ready for production use.
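Here is a small sketch (our own example) of the built-in test runner; save it as math.test.js and run it with node --test:

// File: math.test.js
const test = require('node:test');
const assert = require('node:assert');

test('adds two numbers', () => {
  assert.strictEqual(1 + 2, 3);
});

test('async work resolves', async () => {
  const value = await Promise.resolve('done');
  assert.strictEqual(value, 'done');
});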
7. url.parse() Warns About URLs With Ports That Are Not Numbers
url.parse() accepts URLs with ports that are not numbers. This behavior might result in hostname spoofing with unexpected input. These URLs will throw an error in future versions of Node.js, as the WHATWG URL API already does. Starting with Node.js 20, these URLs cause url.parse() to emit a warning.
Here is urlParse.js:
const url = require('node:url');

url.parse('https://example.com:80/some/path?pageNumber=5');  // no warning
url.parse('https://example.com:abc/some/path?pageNumber=5'); // shows a warning
Execute node urlParse.js: the URL https://example.com:80/some/path?pageNumber=5 with a numerical port does not show a warning, but https://example.com:abc/some/path?pageNumber=5 with a non-numeric port does.
% node urlParse.js
(node:21534) [DEP0170] DeprecationWarning: The URL https://example.com:abc/some/path?pageNumber=5 is invalid. Future versions of Node.js will throw an error.
(Use `node --trace-deprecation ...` to show where the warning was created)
Conclusion
Node.js 20 brings a plethora of innovative features and enhancements that revolutionize the way developers build applications. Improved performance, enhanced security, advanced debugging capabilities, ECMAScript Modules support, and enhanced worker threads open up new possibilities for creating scalable, secure, and high-performing applications. By leveraging these cutting-edge features, developers can stay at the forefront of modern web development and deliver exceptional user experiences. Upgrade to Node.js 20 today and unlock a new era of JavaScript development!
For more information and to develop web applications using Node JS, Hire Node Developer from us as we give you a high-quality product by utilizing all the latest tools and advanced technology. E-mail us anytime at – hello@hkinfosoft.com or Skype us: “hkinfosoft”.
To develop your custom web app using Node JS, please visit our technology page.
Content Source:
- medium.com
What is Pair Programming?
Two Minds, One Goal: The Power of Pair Programming
Pair programming is an agile software development technique in which two developers work together on the same computer to write code. One developer, called the driver, is responsible for typing the code while the other, called the navigator (or observer), reviews the code and makes suggestions. The two developers then switch roles, with the navigator becoming the driver and the driver becoming the navigator.
A well-known early story of pair programming comes from the Chrysler Comprehensive Compensation (C3) payroll project in the late 1990s. The project was behind schedule and the team was struggling to make progress. Working with Extreme Programming pioneer Kent Beck, the team adopted an unconventional practice: two programmers working together on each task, one writing code while the other reviews and offers suggestions.
The early results were promising, and pair programming has since become a widely accepted practice in software development.
Pair programming can reduce the time it takes to complete a task by allowing two developers to work together more effectively. By having two developers work side-by-side, they can quickly identify and resolve problems as they occur. The navigator can also provide insights into potential issues before they become problems, helping to reduce the amount of debugging and testing that needs to be done. Additionally, pair programming enables developers to learn from each other, which can help reduce the amount of time it takes to complete a task.
Pair programming also helps to spread knowledge and skills among the team members, as each programmer learns from the other. This can be especially useful in distributed software projects, where team members may be geographically dispersed.
Story Time
John and Jane had been classmates for the past couple of months, but never really got to know each other very well. They had been assigned a project in their programming class and had decided to work together.
John was very knowledgeable in the subject and had a good understanding of what was expected. Jane, on the other hand, had little to no experience with programming but was eager to learn. Together, they decided to use the pair programming technique.
John and Jane sat down together and discussed their project. John explained the concept of pair programming, how it works, and what their roles would be. Jane was very willing to learn and was excited to get started.
John and Jane began writing the code for their project. John took the lead and wrote most of the code while Jane watched and asked questions. Whenever Jane found a mistake or had an idea, she would suggest it to John, who would then incorporate it into the code.
In this way, John and Jane worked together to complete their project. By the end of it, both of them had gained a better understanding of programming and had come to know each other better.
“Pair programming had become a regular part of John and Jane’s programming class ever since.”
Issues we can face during Pair Programming and its Solutions
One type of problem that programmers can face during pair programming is communication breakdown. This is when two programmers are unable to effectively communicate with each other due to a lack of understanding or different working styles. To overcome this issue, it is important to ensure that both programmers are on the same page before starting to program. This can be done by discussing the problem, breaking it down into small manageable parts, and agreeing on how to approach it. It also helps to keep the amount of speaking and listening roughly equal between the two programmers. Finally, be aware of any language or cultural differences that may be present and adjust your communication accordingly.
Different abilities can be a problem during pair programming because one programmer may be more knowledgeable or experienced in a certain area than the other. This can lead to the more experienced programmer taking on a larger share of the work, leaving the less experienced programmer feeling frustrated or left out.
To overcome this, the two partners should agree on a division of tasks based on their respective strengths. For example, if one partner is more experienced with databases, they can take the lead on designing and setting up the database structure, while the other partner focuses on developing the business logic. Both partners should aim to challenge each other and provide feedback and support to ensure that the project is completed to the highest standard. Additionally, the partners should regularly rotate tasks so that both have the opportunity to learn from each other and gain a more comprehensive understanding of the project.
Task division can be a problem during pair programming if one person takes on more responsibility than the other, leaving the other feeling like they’re not contributing as much to the project. This can lead to frustration and resentment, and can ultimately undermine the collaborative nature of pair programming. To overcome this, the two people should agree on how they will divide the tasks before they begin.
For example, they can decide that one person will take the lead on the coding while the other focuses on debugging and testing, or that each person will take turns writing sections of code. They should also make sure to have regular check-ins to discuss progress, address any issues, and ensure that both parties are on the same page. This will help ensure that both parties feel like they’re contributing equally to the project.
Distractions can be a major problem during pair programming, as they can prevent the pair from focusing on their task and completing it in a timely manner. Distractions could include anything from phones, emails, instant messages, or conversations with other people in the same room. To overcome distractions during pair programming, both partners should agree to have their phones on silent and out of sight. If possible, they should also try to find a quiet space where they can work without interruption. Additionally, they should set aside specific times to check emails and other messages and then get back to their task. They should also set specific goals and deadlines for the task ahead of time to help keep them on track.
Differences in coding style can also cause friction: one person may be accustomed to writing terse code with abbreviations, while the other may prefer longer, more descriptive lines. This can lead to a lot of back and forth, which can slow down the programming process. To overcome this problem, it is important to have open communication and be respectful of each other’s coding styles. The pair should discuss their preferences and try to come to an agreement on which style they should use. They should also discuss the types of coding conventions they want to follow and agree on a coding style guide. Additionally, they should take time to explain their code to each other, as this can help them better understand each other’s coding styles. Finally, they should be open to feedback and criticism and be willing to compromise.
Conclusion
The importance of pair programming lies in its ability to improve the quality of code while also reducing the time it takes to develop software.
Apple introduces Advanced Security features
Apple introduced three advanced security features focused on protecting against threats to user data in the cloud, representing the next step in its ongoing effort to provide users with even stronger ways to protect their data. With iMessage Contact Key Verification, users can verify they are communicating only with whom they intend. With Security Keys for Apple ID, users have the choice to require a physical security key to sign in to their Apple ID account. And with Advanced Data Protection for iCloud, which uses end-to-end encryption to provide Apple’s highest level of cloud data security, users have the choice to further protect important iCloud data, including iCloud Backup, Photos, Notes, and more.
As threats to user data become increasingly sophisticated and complex, these new features join a suite of other protections that make Apple products the most secure on the market: from the security built directly into our custom chips with best-in-class device encryption and data protections, to features like Lockdown Mode, which offers an extreme, optional level of security for users such as journalists, human rights activists, and diplomats. Apple is committed to strengthening both device and cloud security, and to adding new protections over time.
iMessage Contact Key Verification
Apple pioneered the use of end-to-end encryption in consumer communication services with the launch of iMessage, so that messages could only be read by the sender and recipients. FaceTime has also used encryption since launch to keep conversations private and secure. Now with iMessage Contact Key Verification, users who face extraordinary digital threats — such as journalists, human rights activists, and members of government — can choose to further verify that they are messaging only with the people they intend. The vast majority of users will never be targeted by highly sophisticated cyberattacks, but the feature provides an important additional layer of security for those who might be. Conversations between users who have enabled iMessage Contact Key Verification receive automatic alerts if an exceptionally advanced adversary, such as a state-sponsored attacker, were ever to succeed in breaching cloud servers and inserting their own device to eavesdrop on these encrypted communications. And for even higher security, iMessage Contact Key Verification users can compare a Contact Verification Code in person, on FaceTime, or through another secure call.

Pic courtesy: apple.com
Security Keys
Apple introduced two-factor authentication for Apple ID in 2015. Today, with more than 95 percent of active iCloud accounts using this protection, it is the most widely used two-factor account security system in the world that we’re aware of. Now with Security Keys, users will have the choice to make use of third-party hardware security keys to enhance this protection. This feature is designed for users who, often due to their public profile, face concerted threats to their online accounts, such as celebrities, journalists, and members of government. For users who opt in, Security Keys strengthens Apple’s two-factor authentication by requiring a hardware security key as one of the two factors. This takes two-factor authentication even further, preventing even an advanced attacker from obtaining a user’s second factor in a phishing scam.

Pic courtesy: apple.com
Advanced Data Protection for iCloud
For years, Apple has offered industry-leading data security on its devices with Data Protection, the sophisticated file encryption system built into iPhone, iPad, and Mac. “Apple makes the most secure mobile devices on the market. And now, we are building on that powerful foundation,” said Ivan Krstić, Apple’s head of Security Engineering and Architecture. “Advanced Data Protection is Apple’s highest level of cloud data security, giving users the choice to protect the vast majority of their most sensitive iCloud data with end-to-end encryption so that it can only be decrypted on their trusted devices.” For users who opt in, Advanced Data Protection keeps most iCloud data protected even in the case of a data breach in the cloud.
iCloud already protects 14 sensitive data categories using end-to-end encryption by default, including passwords in iCloud Keychain and Health data. For users who enable Advanced Data Protection, the total number of data categories protected using end-to-end encryption rises to 23, including iCloud Backup, Notes, and Photos. The only major iCloud data categories that are not covered are iCloud Mail, Contacts, and Calendar because of the need to interoperate with the global email, contacts, and calendar systems.
Enhanced security for users’ data in the cloud is more urgently needed than ever before, as demonstrated in a new summary of data breach research, “The Rising Threat to Consumer Data in the Cloud,” published today. Experts say the total number of data breaches more than tripled between 2013 and 2021, exposing 1.1 billion personal records across the globe in 2021 alone. Increasingly, companies across the technology industry are addressing this growing threat by implementing end-to-end encryption in their offerings.

Pic courtesy: apple.com
For more information and to develop iOS Mobile Apps, Hire iOS Developer from us as we give you a high-quality product by utilizing all the latest tools and advanced technology. E-mail us anytime at – hello@hkinfosoft.com or Skype us: “hkinfosoft”.
To develop iOS Mobile Apps, please visit our technology page.
Content Source:
- apple.com
New Features in Next.js 13
Next.js 13 has landed in a somewhat confusing way. Many remarkable things have been added; however, a good part is still Beta. Nevertheless, the Beta features give us important signals on how the future of Next.js will be shaped, so there are good reasons to keep a close eye on them, even if you’re going to wait to adopt them.
This article is part of a series exploring those Beta features. Let’s play with Server Components today.
Making server components the default option is arguably the boldest change made in Next.js 13. The goal of server components is to reduce the size of the JS shipped to the client by keeping component code only on the server side. That is, rendering happens only on the server, even if loading of the component is triggered on the client side (via client-side routing). It’s quite a big paradigm shift.
Server Components still felt quite “research-y” when React first demoed them, so it was surprising to see Next.js already betting its future on them. Time flies, and the fantastic engineers behind React must have done some really great work.
npx create-next-app@latest --experimental-app --ts --eslint next13-server-components
Let’s have some fun playing with the project.
Server Component
The first difference you’ll notice is that a new app folder now sits alongside our old friend pages. We’ll save the routing changes for another article, but what’s worth mentioning for now is that every component under the app folder is, by default, a server component, meaning that it’s rendered on the server side and its code stays on the server side.
Let’s create our very first server component now:
// app/server/page.tsx
export default function Server() {
  console.log('Server page rendering: this should only be printed on the server');

  return (
    <div>
      <h1>Server Page</h1>
      <p>My secret key: {process.env.MY_SECRET_ENV}</p>
    </div>
  );
}
If you access the /server route, whether by a fresh browser load or client-side routing, you’ll only see the line of log printed in your server console but never in the browser console. The environment variable value is fetched from the server side as well.
Looking at network traffic in the browser, you’ll see the content of the Server component is loaded via a remote call which returns an octet stream of JSON data of the render result:

Pic courtesy: medium.com
{ ... "childProp": { "current": [ [ "$", "div", null, { "children": [ ["$", "h1", null, { "children": "Server Page" }], [ "$", "p", null, { "children": ["My secret key: ", "abc123"] } ] ] } ] ] } }
Rendering a server component is literally an API call to get serialized virtual DOM and then materialize it in the browser.
The most important thing to remember is that server components are for rendering non-interactive content, so there are no event handlers, no React hooks, and no browser-only APIs.
The most significant benefit is you can freely access any backend resource and secrets in server components. It’s safer (data don’t leak) and faster (code doesn’t leak).
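For example, a server component can be declared as an async function and fetch data directly on the server (a sketch; the API URL below is a placeholder):

// app/users/page.tsx
export default async function UsersPage() {
  // Runs only on the server; the fetch call and any credentials never reach the browser.
  const res = await fetch('https://api.example.com/users', { cache: 'no-store' });
  const users: { id: number; name: string }[] = await res.json();

  return (
    <ul>
      {users.map((user) => (
        <li key={user.id}>{user.name}</li>
      ))}
    </ul>
  );
}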
Client Component
To make a client component, you’ll need to mark it explicitly with the 'use client' directive:
// app/client/page.tsx
'use client';

import { useEffect } from 'react';

export default function Client() {
  console.log(
    'Client page rendering: this should only be printed on the server during ssr, and client when routing'
  );

  useEffect(() => {
    console.log('Client component rendered');
  });

  return (
    <div>
      <h1>Client Page</h1>
      {/* Uncommenting this will result in an error complaining about inconsistent
          rendering between client and server, which is very true */}
      {/* <p>My secret env: {process.env.MY_SECRET_ENV}</p> */}
    </div>
  );
}
As you may already anticipate, this gives you a similar behavior to the previous Next.js versions.
When the page is first loaded, it’s rendered by SSR, so you should see the first log in the server console; during client-side routing, both log messages will appear in the browser console.
Mix and Match
One of the biggest differences between Server Component and SSR is that SSR is at page level, while Server Component, as its name says, is at component level. This means you can mix and match server and client components in a render tree as you wish.
// A server page containing client component and nested server component
// app/mixmatch/page.tsx
import Client from './client';
import NestedServer from './nested-server';

export default function MixMatchPage() {
  console.log('MixMatchPage rendering');

  return (
    <div>
      <h1>Server Page</h1>
      <div className="box">
        <Client message="A message from server">
          <NestedServer />
        </Client>
      </div>
    </div>
  );
}
// app/mixmatch/client.tsx
'use client';

import { useEffect } from 'react';

export default function Client({
  message,
  children,
}: {
  message: string;
  children: React.ReactNode;
}) {
  console.log('Client component rendering');

  return (
    <div>
      <h2>Client Child</h2>
      <p>Message from parent: {message}</p>
      <div className="box-red">{children}</div>
    </div>
  );
}
// app/mixmatch/nested-server.tsx
export default function NestedServer() {
  console.log('Nested server component rendering');

  return (
    <div>
      <h3>Nested Server</h3>
      <p>Nested server content</p>
    </div>
  );
}

Pic courtesy: medium.com
In a mixed scenario like this, server and client components get rendered independently, and the results are assembled by React runtime. Props passed from server components to client ones are serialized across the network (and need to be serializable).
Server Components Can Degenerate
One caution you need to take is that if a server component is directly imported into a client one, it silently degenerates into a client component.
Let’s revise the previous example slightly to observe it:
// app/degenerate/page.tsx
import Client from './client';

export default function DegeneratePage() {
  console.log('Degenerated page rendering');

  return (
    <div>
      <h1>Degenerated Page</h1>
      <div className="box-blue">
        <Client message="A message from server" />
      </div>
    </div>
  );
}
// app/degenerate/client.tsx
'use client';

import NestedServer from './nested-server';

export default function Client({ message }: { message: string }) {
  console.log('Client component rendering');

  return (
    <div>
      <h2>Client Child</h2>
      <p>Message from parent: {message}</p>
      <div className="box-blue">
        <NestedServer />
      </div>
    </div>
  );
}
// app/degenerated/nested-server.tsx
export default function NestedServer() {
  console.log('Nested server component rendering');

  return (
    <div>
      <h3>Degenerated Server</h3>
      <p>Degenerated server content</p>
    </div>
  );
}
If you check out the log, you’ll see NestedServer has “degenerated” and is now rendered by the browser.
For more information and to develop web applications using React JS, Hire React Developer from us as we give you a high-quality product by utilizing all the latest tools and advanced technology. E-mail us anytime at – hello@hkinfosoft.com or Skype us: “hkinfosoft”.
To develop custom web apps using React JS, please visit our technology page.
Content Source:
- medium.com
Top Web 3.0 Trends for 2023
The term “Web 3.0 or Web3” was coined in 2014 by Ethereum co-founder Gavin Wood, and the idea gained interest in 2021 from cryptocurrency enthusiasts and large tech companies.
Web3 is still evolving and being defined, as such, there is not an established and universally accepted definition. Yet, Packy McCormick, an investor who helped disseminate Web3, has defined it as “the internet owned by the builders and users, orchestrated with tokens.”
The concept of Web3 can be both puzzling and vague, and to help provide an understanding, here is a quick review of the evolution of the internet over the years:
Web 1.0 — The Static Web (around 1990–2005). It was made of read-only webpages that, by and large, lacked much in the way of interactive features. Content generation was limited, and information was hard to find.
Web 2.0 — The Dynamic Web (from around 2004). It is made of new software applications built on the web, with most of the value generated by companies such as Google, Apple, Amazon, and Facebook.
The vision of Web3 tends to be a more democratic version of today’s online and digital world, where Web3 platforms could give creators and users a way to monetize their activity and contributions. For example, PIXIE, a crypto version of TikTok or Instagram, rewards all social interactions with the cryptocurrency PIX, which is called “social content mining.”

Pic courtesy: medium.com
There are many different paths to the evolution of Web3, but industry opinion leaders often suggest the following characteristics to help define Web3:
- Semantic Web: Enhanced web technologies that allow users to create, share, and connect content through search and analysis. The search and analysis capabilities of Web3 would focus more on understanding the meaning of words and the context behind them.
- Decentralized: Unlike Web 1.0 and Web 2.0, where governance and application were largely centralized (think about Facebook/Meta), Web3 will be decentralized, with all applications and services enabled by a distributed approach where there is not a central authority.
- 3D Graphics or Metaverse: Some tech experts associate the metaverse with Web3 because of its potential to create a new level of immersion and interaction between the physical and virtual worlds. Pioneering applications are being seen across industries, from gaming and health to real estate and e-commerce.
- Artificial Intelligence: The combination of semantic capabilities and AI will allow significant improvements to understand a multitude of data and provide faster and more relevant results (e.g., climate prediction or human-based corrupt practices such as biased product reviews).
Other features of Web3 include Ubiquity (i.e., anywhere/everywhere), Blockchain (i.e., decentralized ledger), and edge computing.
Key trends of Web3 in 2023
As Web3 embraces these features, and continues to use blockchains, cryptocurrencies, and NFTs to give power back to the users in the form of ownership, we continue to see many companies supercharging their brands with this technology.
Here are some of the key Web3 trends to look out for in 2023.
Industry to increase emphasis on cybersecurity
Web3 offers several benefits for users, such as data ownership, transparency, and fewer intermediaries, but it raises concerns with novel security threats. Some examples include smart contract logic hacks, crypto-jacking, rug pulls, and ice phishing.
Along with cryptocurrencies, NFTs have also become an increasingly popular target for scammers.

Pic courtesy: medium.com
With the growing concerns in the field, a number of start-ups are focusing on developing security, data, monitoring, and storage solutions for Web3, a trend reflected in a growing number of industry investments:
- Immunefi: a bug bounty and security services platform for DeFi has raised $24 million as part of its Series A.
- CertiK: a leading Web3 security company raised $88 million earlier this year in its Series B3 financing round.
- Halborn: a cybersecurity firm serving both traditional finance and blockchain-based clients raised $90 million in its Series A financing round.
More investment and emphasis on making Web3 experiences as secure as possible will help reduce scams and make companies more comfortable investing in Web3-related projects.
Market growth will be driven by Metaverse investments and M&A deals
The Web3 space will continue to attract investments driven by two forces: metaverse-related projects and metaverse mergers and acquisitions deals.
There are several reasons which drive interest from both investors and brands:
- Growing focus on integrating digital and physical worlds using mixed reality (MR), augmented reality (AR), and virtual reality (VR).
- Metaverse has the potential to elevate several industries in numerous ways, such as manufacturing (e.g., digital prototypes), hospitality & tourism (e.g., preview and elevate customer experiences) and healthcare (e.g., accelerate disease assessment and treatment).
- High penetration rate of users from gaming, content creation, social interaction, learning, and training.
- Transform e-commerce and customer experience.
As a result, various end-user players such as Meta, Gucci, Nike, Starbucks, and Adidas are entering the metaverse in different ways to experiment with different ways to elevate the internet experience with customers.
Starbucks is set to launch a Web3-enabled loyalty program and a non-fungible token (NFT) platform that allows customers to earn and buy digital assets that unveil exclusive experiences and rewards.

Pic courtesy: medium.com
As the metaverse and NFTs continue to soar, more M&A opportunities will emerge to accelerate building immersive experiences and help to build large-scale communities underpinned by engaging content. Gaming is one of the biggest bets.

Pic courtesy: medium.com
For more information and to develop web applications using JavaScript, Hire React Developer from us as we give you a high-quality product by utilizing all the latest tools and advanced technology. E-mail us anytime at – hello@hkinfosoft.com or Skype us: “hkinfosoft”.
To develop custom web apps using JavaScript, please visit our technology page.
Content Source:
- medium.com
What’s new in TypeScript 4.9
If you’re not familiar with TypeScript, it’s a language that builds on JavaScript by adding types and type-checking. Types can describe things like the shapes of our objects, how functions can be called, and whether a property can be null or undefined. TypeScript can check these types to make sure you’re not making mistakes in your programs so you can code with confidence. It can also power other tooling like auto-completion, go-to-definition, and refactorings in the editor. In fact, if you’ve used an editor like Visual Studio or VS Code for JavaScript, that same experience is already powered by TypeScript!
To get started with TypeScript 4.9, you can get it through NuGet, or use npm with the following command:
npm install -D typescript
You can also get editor support by
- Downloading for Visual Studio 2022/2019
- Following directions for Visual Studio Code
Here’s a quick list of what’s new in TypeScript 4.9!
- The satisfies Operator
- Unlisted Property Narrowing with the in Operator
- Auto-Accessors in Classes
- Checks For Equality on NaN
- File-Watching Now Uses File System Events
- “Remove Unused Imports” and “Sort Imports” Commands for Editors
- Go-to-Definition on return Keywords
- Performance Improvements
- Correctness Fixes and Breaking Changes
What’s New Since the Beta and RC?
Since the Release Candidate, no changes have been made to TypeScript 4.9.
TypeScript 4.9 beta originally included auto-accessors in classes, along with the performance improvements described below; however, these did not get documented in the 4.9 beta blog post.
Not originally shipped in the 4.9 beta were the new “Remove Unused Imports” and “Sort Imports” commands for editors, and new go-to-definition functionality on return keywords.
The satisfies Operator
TypeScript developers are often faced with a dilemma: they want to ensure that some expression matches some type, but also want to keep the most specific type of that expression for inference purposes.
For example:
// Each property can be a string or an RGB tuple.
const palette = {
    red: [255, 0, 0],
    green: "#00ff00",
    bleu: [0, 0, 255]
//  ^^^^ sacrebleu - we've made a typo!
};

// We want to be able to use array methods on 'red'...
const redComponent = palette.red.at(0);

// or string methods on 'green'...
const greenNormalized = palette.green.toUpperCase();
Notice that they’ve written bleu, whereas they probably should have written blue. They could try to catch that bleu typo by using a type annotation on palette, but they’d lose the information about each property.
type Colors = "red" | "green" | "blue";
type RGB = [red: number, green: number, blue: number];

const palette: Record<Colors, string | RGB> = {
    red: [255, 0, 0],
    green: "#00ff00",
    bleu: [0, 0, 255]
//  ~~~~ The typo is now correctly detected
};

// But we now have an undesirable error here - 'palette.red' "could" be a string.
const redComponent = palette.red.at(0);
The new satisfies operator lets us validate that the type of an expression matches some type, without changing the resulting type of that expression. As an example, you could use satisfies to validate that all the properties of palette are compatible with string | number[]:
type Colors = "red" | "green" | "blue";
type RGB = [red: number, green: number, blue: number];

const palette = {
    red: [255, 0, 0],
    green: "#00ff00",
    bleu: [0, 0, 255]
//  ~~~~ The typo is now caught!
} satisfies Record<Colors, string | RGB>;

// Both of these methods are still accessible!
const redComponent = palette.red.at(0);
const greenNormalized = palette.green.toUpperCase();
satisfies can be used to catch lots of possible errors. For example, they could ensure that an object has all the keys of some type, but no more:
type Colors = "red" | "green" | "blue";

// Ensure that we have exactly the keys from 'Colors'.
const favoriteColors = {
    "red": "yes",
    "green": false,
    "blue": "kinda",
    "platypus": false
//  ~~~~~~~~~~ error - "platypus" was never listed in 'Colors'.
} satisfies Record<Colors, unknown>;

// All the information about the 'red', 'green', and 'blue' properties is retained.
const g: boolean = favoriteColors.green;
Maybe they don't care about whether the property names match up, but they do care about the types of each property. In that case, they can also ensure that all of an object's property values conform to some type.
type RGB = [red: number, green: number, blue: number];

const palette = {
    red: [255, 0, 0],
    green: "#00ff00",
    blue: [0, 0]
//        ~~~~~~ error!
} satisfies Record<string, string | RGB>;

// Information about each property is still maintained.
const redComponent = palette.red.at(0);
const greenNormalized = palette.green.toUpperCase();
Unlisted Property Narrowing with the in Operator
As developers, they often need to deal with values that aren’t fully known at runtime. In fact, they often don’t know if properties exist, whether they’re getting a response from a server or reading a configuration file. JavaScript’s in operator can check whether a property exists on an object.
Previously, TypeScript allowed us to narrow away any types that don’t explicitly list a property.
interface RGB {
    red: number;
    green: number;
    blue: number;
}

interface HSV {
    hue: number;
    saturation: number;
    value: number;
}

function setColor(color: RGB | HSV) {
    if ("hue" in color) {
        // 'color' now has the type HSV
    }
    // ...
}
Here, the type RGB didn't list hue and got narrowed away, leaving us with the type HSV.
But what about examples where no type listed a given property? In those cases, the language didn’t help us much. Let’s take the following example in JavaScript:
function tryGetPackageName(context) {
    const packageJSON = context.packageJSON;
    // Check to see if we have an object.
    if (packageJSON && typeof packageJSON === "object") {
        // Check to see if it has a string name property.
        if ("name" in packageJSON && typeof packageJSON.name === "string") {
            return packageJSON.name;
        }
    }
    return undefined;
}
Rewriting this to canonical TypeScript would just be a matter of defining and using a type for context; however, picking a safe type like unknown for the packageJSON property would cause issues in older versions of TypeScript.
interface Context {
    packageJSON: unknown;
}

function tryGetPackageName(context: Context) {
    const packageJSON = context.packageJSON;
    // Check to see if we have an object.
    if (packageJSON && typeof packageJSON === "object") {
        // Check to see if it has a string name property.
        if ("name" in packageJSON && typeof packageJSON.name === "string") {
        //                                               ~~~~
        // error! Property 'name' does not exist on type 'object'.
            return packageJSON.name;
            //                 ~~~~
            // error! Property 'name' does not exist on type 'object'.
        }
    }
    return undefined;
}
This is because while the type of packageJSON was narrowed from unknown to object, the in operator strictly narrowed to types that actually defined the property being checked. As a result, the type of packageJSON remained object.
TypeScript 4.9 makes the in operator a little bit more powerful when narrowing types that don't list the property at all. Instead of leaving them as-is, the language will intersect their types with Record<"property-key-being-checked", unknown>.
So in our example, packageJSON will have its type narrowed from unknown to object to object & Record<"name", unknown>. That allows us to access packageJSON.name directly and narrow it independently.
interface Context {
    packageJSON: unknown;
}

function tryGetPackageName(context: Context): string | undefined {
    const packageJSON = context.packageJSON;
    // Check to see if we have an object.
    if (packageJSON && typeof packageJSON === "object") {
        // Check to see if it has a string name property.
        if ("name" in packageJSON && typeof packageJSON.name === "string") {
            // Just works!
            return packageJSON.name;
        }
    }
    return undefined;
}
TypeScript 4.9 also tightens up a few checks around how in is used, ensuring that the left side is assignable to the type string | number | symbol, and the right side is assignable to object. This helps check that we’re using valid property keys, and not accidentally checking primitives.
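As a rough sketch of what those tightened checks flag (an illustrative example, not from the original post), a key typed as unknown is now rejected on the left-hand side of in, while the built-in PropertyKey alias (string | number | symbol) satisfies the check:

function hasKey(obj: object, key: unknown) {
    return key in obj;
    //     ~~~
    // error under 4.9's stricter check - 'key' isn't known to be a
    // string, number, or symbol.
}

// Narrowing the parameter type satisfies the new check.
function hasKnownKey(obj: object, key: PropertyKey) {
    return key in obj;
}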
Auto-Accessors in Classes
TypeScript 4.9 supports an upcoming feature in ECMAScript called auto-accessors. Auto-accessors are declared just like properties on classes, except that they’re declared with the accessor keyword.
class Person {
    accessor name: string;

    constructor(name: string) {
        this.name = name;
    }
}
Under the covers, these auto-accessors “de-sugar” to a get and set accessor with an unreachable private property.
class Person {
    #__name: string;

    get name() {
        return this.#__name;
    }
    set name(value: string) {
        this.#__name = value;
    }

    constructor(name: string) {
        this.name = name;
    }
}
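As a small usage sketch (not from the original post), an auto-accessor reads and writes like an ordinary property, with every access routed through the generated getter and setter:

const person = new Person("Ada");
person.name = "Grace";    // goes through the generated setter
console.log(person.name); // "Grace" - read through the generated getter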
Checks For Equality on NaN
A major gotcha for JavaScript developers is checking against the value NaN using the built-in equality operators.
For some background, NaN is a special numeric value that stands for “Not a Number”. Nothing is ever equal to NaN – even NaN!
console.log(NaN == 0)    // false
console.log(NaN === 0)   // false
console.log(NaN == NaN)  // false
console.log(NaN === NaN) // false
But at least symmetrically everything is always not-equal to NaN.
console.log(NaN != 0)    // true
console.log(NaN !== 0)   // true
console.log(NaN != NaN)  // true
console.log(NaN !== NaN) // true
This technically isn’t a JavaScript-specific problem, since any language that contains IEEE-754 floats has the same behavior; but JavaScript’s primary numeric type is a floating point number, and number parsing in JavaScript can often result in NaN. In turn, checking against NaN ends up being fairly common, and the correct way to do so is to use Number.isNaN – but as we mentioned, lots of people accidentally end up checking with someValue === NaN instead.
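As a quick illustration (not from the original post) of how easily NaN can show up from parsing, note that failed numeric conversion produces NaN silently rather than throwing:

const n = Number("not a number"); // NaN - no exception is thrown
console.log(Number.isNaN(n));     // true - the reliable way to check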
TypeScript now errors on direct comparisons against NaN, and will suggest using some variation of Number.isNaN instead.
function validate(someValue: number) {
    return someValue !== NaN;
    //     ~~~~~~~~~~~~~~~~~
    // error: This condition will always return 'true'.
    // Did you mean '!Number.isNaN(someValue)'?
}
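For comparison, a corrected version of that check (a minimal sketch) follows the error message's suggestion and uses Number.isNaN:

function validate(someValue: number) {
    // Number.isNaN correctly reports whether the value is NaN.
    return !Number.isNaN(someValue);
}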
They believe that this change should strictly help catch beginner errors, similar to how TypeScript currently issues errors on comparisons against object and array literals.
File-Watching Now Uses File System Events
In earlier versions, TypeScript leaned heavily on polling for watching individual files. Using a polling strategy meant checking the state of a file periodically for updates. On Node.js, fs.watchFile is the built-in way to get a polling file-watcher. While polling tends to be more predictable across platforms and file systems, it means that your CPU has to periodically get interrupted and check for updates to the file, even when nothing’s changed. For a few dozen files, this might not be noticeable; but on a bigger project with lots of files – or lots of files in node_modules – this can become a resource hog.
Generally speaking, a better approach is to use file system events. Instead of polling, they can announce that they’re interested in updates of specific files and provide a callback for when those files actually do change. Most modern platforms in use provide facilities and APIs like CreateIoCompletionPort, kqueue, epoll, and inotify. Node.js mostly abstracts these away by providing fs.watch. File system events usually work great, but there are lots of caveats to using them, and in turn, to using the fs.watch API. A watcher needs to be careful to consider inode watching, unavailability on certain file systems (e.g. networked file systems), whether recursive file watching is available, whether directory renames trigger events, and even file watcher exhaustion! In other words, it’s not quite a free lunch, especially if you’re looking for something cross-platform.
As a result, their default was to pick the lowest common denominator: polling. Not always, but most of the time.
Over time, they’ve provided the means to choose other file-watching strategies. This allowed them to get feedback and harden their file-watching implementation against most of these platform-specific gotchas. As TypeScript has needed to scale to larger codebases, and has improved in this area, they felt swapping to file system events as the default would be a worthwhile investment.
In TypeScript 4.9, file watching is powered by file system events by default, only falling back to polling if it fails to set up event-based watchers. For most developers, this should provide a much less resource-intensive experience when running in --watch mode, or running with a TypeScript-powered editor like Visual Studio or VS Code.
The way file-watching works can still be configured through environment variables and watchOptions, and some editors like VS Code can support watchOptions independently. Developers using more exotic set-ups where source code resides on a networked file system (like NFS and SMB) may need to opt back into the older behavior; though if a server has reasonable processing power, it might just be better to enable SSH and run TypeScript remotely so that it has direct local file access. VS Code has plenty of remote extensions to make this easier.
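For example, a project that needs to opt back into polling (say, because its sources live on an NFS mount) could do so through watchOptions in tsconfig.json. This is a minimal sketch; the surrounding compiler options are placeholders:

{
  "compilerOptions": {
    "strict": true
  },
  "watchOptions": {
    // Fall back to a fixed polling interval instead of file system events.
    "watchFile": "fixedPollingInterval"
  }
}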
“Remove Unused Imports” and “Sort Imports” Commands for Editors
Previously, TypeScript only supported two editor commands to manage imports. For our examples, take the following code:
import { Zebra, Moose, HoneyBadger } from "./zoo";
import { foo, bar } from "./helper";

let x: Moose | HoneyBadger = foo();
The first was called “Organize Imports” which would remove unused imports, and then sort the remaining ones. It would rewrite that file to look like this one:
import { foo } from "./helper";
import { HoneyBadger, Moose } from "./zoo";

let x: Moose | HoneyBadger = foo();
In TypeScript 4.3, they introduced a command called “Sort Imports” which would only sort imports in the file, but not remove them – and would rewrite the file like this.
import { bar, foo } from "./helper";
import { HoneyBadger, Moose, Zebra } from "./zoo";

let x: Moose | HoneyBadger = foo();
The caveat with “Sort Imports” was that in Visual Studio Code, this feature was only available as an on-save command – not as a manually triggerable command.
TypeScript 4.9 adds the other half, and now provides “Remove Unused Imports”. TypeScript will now remove unused import names and statements, but will otherwise leave the relative ordering alone.
import { Moose, HoneyBadger } from "./zoo";
import { foo } from "./helper";

let x: Moose | HoneyBadger = foo();
This feature is available to all editors that wish to use either command; but notably, Visual Studio Code (1.73 and later) will have support built in and will surface these commands via its Command Palette. Users who prefer to use the more granular “Remove Unused Imports” or “Sort Imports” commands should be able to reassign the “Organize Imports” key combination to them if desired.
Go-to-Definition on return Keywords
In the editor, when running a go-to-definition on the return keyword, TypeScript will now jump you to the top of the corresponding function. This can be helpful to get a quick sense of which function a return belongs to.
They expect TypeScript will expand this functionality to more keywords such as await and yield or switch, case, and default.
For more information, or to develop web applications using TypeScript, Hire TypeScript Developer from us; we deliver a high-quality product using the latest tools and advanced technology. E-mail us anytime at hello@hkinfosoft.com or Skype us: "hkinfosoft".
To develop custom web apps using TypeScript, please visit our technology page.
Content Source:
- devblogs.microsoft.com