In the current age of JavaScript, Promises are the default way to handle asynchronous behavior. But how do they work? And why should you understand them well?
When we make you a promise, you take our word that we will fulfill that promise.
But we don’t tell you when that promise will be fulfilled, so life goes on…
There are two possible scenarios: fulfillment or rejection.
One day, we fulfill that promise. It makes you so happy that you post about it on Twitter!
One day, we tell you that we can’t fulfill the promise.
You make a sad post on Twitter about how we didn’t do what we had promised.
Both scenarios cause an action. The first is a positive one, and the second is a negative one.
Keep this scenario in mind while going through how JavaScript Promises work.
JavaScript is synchronous. It runs from top to bottom. Every line of code below will wait for the execution of the code above it.
But when you want to get data from an API, you don’t know how fast you will get the data back. In fact, you don’t even know whether you will get data or an error. Errors happen all the time, and they can’t be planned for. But we can be prepared for them.
So if your code waits synchronously for a result from the API, it blocks the browser. The page freezes, and neither we nor our users are happy about that at all!
Perfect situation for a Promise!
Now that we know that you should use a Promise when you make Ajax requests, we can dive into using Promises. First, we will show you how to define a function that returns a Promise. Then, we will dive into how you can use a function that returns a Promise.
Below is an example of a function that returns a Promise:
function doSomething(value) {
  return new Promise((resolve, reject) => {
    // Fake an API call
    setTimeout(() => {
      if (value) {
        resolve(value)
      } else {
        reject('The Value Was Not Truthy')
      }
    }, 5000)
  });
}
The function returns a Promise. This Promise can be resolved or rejected.
Like a real-life promise, a Promise can be fulfilled or rejected.
According to MDN Web Docs, a JavaScript Promise can have one of three states:
"- pending: initial state, neither fulfilled nor rejected.
- fulfilled: meaning that the operation was completed successfully.
- rejected: meaning that the operation failed."
The pending state is the initial state. We are in this state as soon as we call the doSomething() function, so we don’t know yet whether the Promise will be resolved or rejected.
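To see the pending state in action, here is a small sketch based on the doSomething() function above (the delay is shortened to 100 ms purely for illustration):

```javascript
function doSomething(value) {
  return new Promise((resolve, reject) => {
    // Fake an API call with a short delay
    setTimeout(() => {
      if (value) {
        resolve(value)
      } else {
        reject('The Value Was Not Truthy')
      }
    }, 100)
  });
}

const pendingPromise = doSomething(42)

// Immediately after the call, the Promise has not settled yet
console.log(pendingPromise) // Promise { <pending> }

pendingPromise.then((result) => {
  // By the time this callback runs, the Promise is fulfilled
  console.log(result) // 42
})
```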
In the example, if the value is truthy, the Promise will be resolved. In this case, we pass the variable value in it to use it when we would call this function.
We can define our conditions to decide when to resolve our Promise.
In the example, if the value is falsy, the Promise will be rejected. In this case, we pass an error message. It’s just a string here, but when you make an Ajax request, you pass the server’s error.
Now that we know how to define a Promise, we can dive into how to use a function that returns a Promise:
// Classic Promise
doSomething().then((result) => {
  // Do something with the result
}).catch((error) => {
  console.error('Error message: ', error)
});

// Use a returned `Promise` with Async/Await
(async () => {
  let data = null
  try {
    data = await doSomething()
    // Do something with the result
  } catch (error) {
    console.error('Error message: ', error)
  }
})();
You can recognize a function that returns a Promise by the .then() method or the await keyword. The .catch() method will be called if there is an error in your Promise, so error handling for a Promise is pretty straightforward.
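As a further sketch, several independent Promises can be consumed together with Promise.all. The snippet below reuses the doSomething() pattern from above with a shortened delay (an illustrative choice, not part of the original example):

```javascript
function doSomething(value) {
  return new Promise((resolve, reject) => {
    // Shortened fake API call for illustration
    setTimeout(() => {
      value ? resolve(value) : reject('The Value Was Not Truthy')
    }, 100)
  });
}

// Promise.all resolves when every Promise in the array has resolved,
// and rejects as soon as any one of them rejects.
const allResults = Promise.all([doSomething(1), doSomething(2), doSomething(3)])

allResults
  .then((results) => {
    console.log(results) // [ 1, 2, 3 ]
  })
  .catch((error) => {
    console.error('Error message: ', error)
  })
```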
Promises are also used in a lot of JavaScript libraries and frameworks. The simplest example in the browser is the Fetch API, which you should use for making Ajax requests.
For more information and to develop your web app using front-end technology, hire a front-end developer from us, as we deliver a high-quality solution by utilizing all the latest tools and advanced technology. E-mail us anytime at hello@hkinfosoft.com or Skype us at "hkinfosoft". To develop your custom website using JS, please visit our technology page.
In this blog, we’ll make a comparative analysis of Golang vs. Node.js for backend web development.
Now, we want to understand whether the switch from a traditional Node.js to the popular Golang is sensible or not. That’s why we would like to compare the two solutions to help you make the best choice.
Even though Golang was only launched in 2009, it can still be regarded as quite mature and robust.
However, there can be no comparison when Node.js comes into play. It has a broader audience which supports the platform, even though the API is changing somewhat.
Being an interpreted language based on JavaScript, Node.js turns out to be a bit slower than compiled languages. Node.js is not able to provide the raw performance on CPU- or memory-bound tasks that Go does, because Go is a compiled language in the tradition of C and C++, which are good in terms of raw performance.
However, when it comes to real life, both show almost equal results.
Node.js is single-threaded and uses an event-callback mechanism, which is what makes it much weaker here than Go. Go uses co-routines (called “goroutines”) and lightweight threads, and communication among them is elegant and seamless thanks to channels.
Node.js is much weaker in terms of parallel processes for big projects compared to Golang, which was specifically designed to overcome possible issues in this area. Golang has the advantage due to goroutines that enable multiple threads to be performed concurrently, with parallel tasks executed simply and safely.
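To illustrate the Node.js side of this comparison, here is a minimal JavaScript sketch of event-loop concurrency (the task names and timings are purely illustrative assumptions):

```javascript
// A fake I/O-bound task: resolves with its label after `ms` milliseconds
const task = (label, ms) =>
  new Promise((resolve) => setTimeout(() => resolve(label), ms))

// Both timers run concurrently on the event loop, so the total wait is
// roughly 100 ms rather than the 200 ms a sequential version would need.
const concurrent = Promise.all([task('a', 100), task('b', 100)])

concurrent.then((labels) => {
  console.log(labels) // [ 'a', 'b' ]
})
```

This is cooperative concurrency on a single thread; Go's goroutines, by contrast, can also run in parallel across OS threads.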
Front-End and Back-End
You should keep in mind that Golang is perfect for server-side applications, while Node.js is unrivaled when it comes to client-side development. Therefore, Go is an ideal decision if you want to create high-performing concurrent services on the back-end. And Node.js is your choice for the front-end.
For a long time, Golang was regarded as having a very small community because it was young and not widely implemented. Now, the situation has changed. Even though Go still fails to keep pace with Node.js support, the language boasts numerous packages (more than 100), and the number keeps growing. With JavaScript, you’ll have no difficulty finding the right tool or package for your project; today, there are more than 100,000. Hundreds of libraries, various tutorials, and multiple platforms are at your disposal.
According to the 2017 Developer Survey by StackOverflow, JavaScript continues to occupy the leading position, being chosen by 61.2% of developers. Go showed a much smaller result: 4.3%. However, Go is already among the most promising languages of 2018, as even a simple Google search suggests.
Currently, it’s still much easier to find a competent team of Node.js developers than put together one of Golang specialists. However, you can always take the IT outsourcing route and reach out to a reputable team with a strong portfolio of Go work.
When you deal with errors while using Go, you have to implement explicit error checking. This can make the process of finding the causes of errors difficult. Yet numerous developers argue that such an approach provides a cleaner application in general.
The Node.js approach with a throw/catch mechanism is more traditional and is preferred by many developers, although it has some consistency problems in the end.
JavaScript is one of the most common coding languages nowadays. If you’re familiar with it, it will be no big deal to adapt to using Node.js programming. If you’re a newbie in JavaScript, you can leverage JavaScript’s vast community, which is always ready to share its expertise or give advice.
With Golang, you have to be ready to learn a new language, including co-routines, strict typing, pointers, and other programming concepts that may confuse you at first.
The latest trend of 2017 is blockchain technology. Many projects nowadays trumpet their blockchain-based application at every opportunity. And for good reason! The technology provides reliability, full control for the user, high-quality data, longevity, process integrity, transparency, and one more pack of buzzwords that define the viability of many startups today.
Theoretically, it’s possible to implement Node.js for developing a blockchain. However, building a blockchain in Go is a much easier solution and we highly recommend it.
In its essence, a blockchain is a distributed database of records. In Go, this implies the implementation of an array and a map: the array keeps ordered hashes, and the map keeps hash -> block pairs (maps are unordered). Then, we add blocks, and that’s it!
So, what should you choose: Node.js or Golang? The answer to this question depends on which type of development you need at the moment and how much you are going to scale the project.
For sure, Node.js has a broader community and comprehensive documentation; yet Go has a syntactically cleaner concurrency model and is better suited for scaling up.
Node.js, in its turn, can offer you a variety of packages, most of which are hard to re-implement in Go. In such cases, it would be wiser to use Node.js.
If you feel overwhelmed by all this information or simply need some extra hands with Golang or Node.js expertise, then write a comment to start a conversation with other developers here.
TypeScript 4.2 was just released. What awesome features does this release bring? What impact does it have on your daily life as a developer? Should you immediately update?
Here, we will be going through all the most exciting new features. Here is a summary:
To get an editor with the latest TypeScript version, use Visual Studio Code Insiders; alternatively, you can use a plugin for VS Code.
If you just want to have a play while reading the article, you can use the TypeScript Playground. It is a fun and super easy tool to use.
Sometimes TypeScript just doesn’t resolve types properly. It may return the correct types but just not return the correct alias. The alias could be important and shouldn’t be lost along the way.
Let’s check this function:
export type BasicPrimitive = number | bigint;

export function divisablePer0(value: BasicPrimitive) {
  if (value === 0) {
    return undefined;
  }
  return value;
}

type ReturnAlias = ReturnType<typeof divisablePer0>;
// number | bigint | undefined
Notice that an undefined type needs to be added to the method’s return type, as it returns undefined in some scenarios.
Before 4.2, the return type of divisablePer0 is number | bigint | undefined. That type is indeed correct, but we have lost some information: the alias BasicPrimitive got lost in the process, and it is a handy piece of information to have.
If we do the same on TypeScript 4.2 we get the correct alias:
export type BasicPrimitive = number | bigint;

export function divisablePer0(value: BasicPrimitive) {
  if (value === 0) {
    return undefined;
  }
  return value;
}

type ReturnAlias = ReturnType<typeof divisablePer0>;
// BasicPrimitive | undefined
Now the method divisablePer0 has the proper return type: BasicPrimitive | undefined. That makes your code more readable just by upgrading.
In a previous article about mapped types, we already looked at TypeScript Tuples. As a refresher, let’s revisit the example:
let arrayOptions: [string, boolean, boolean];

arrayOptions = ['config', true, true]; // works

arrayOptions = [true, 'config', true];
//              ^^^^^  ^^^^^^^^^
// Does not work: incompatible types

function printConfig(data: string) {
  console.log(data);
}

printConfig(arrayOptions[0]);
However, we forgot to check whether Tuples can use optional elements. Let’s see what the previous example would look like:
let arrayOptions: [string, boolean?, boolean?];

arrayOptions = ['config', true, true]; // works
arrayOptions = ['config', true];       // works too
arrayOptions = ['config'];             // works too

function printConfig(data: string) {
  console.log(data);
}

printConfig(arrayOptions[0]);
Prior to 4.2 we could even use the spread operator to indicate a dynamic number of elements:
let arrayOptions: [string, ...boolean[]];

arrayOptions = ['config', true, true]; // works
arrayOptions = ['config', true];       // works too
arrayOptions = ['config'];             // works too

function printConfig(data: string) {
  console.log(data);
}

printConfig(arrayOptions[0]);
In this new TypeScript version, Tuples become more powerful. Previously, we could use the spread operator, but we couldn’t define the types of the last elements.
let arrayOptions: [string, ...boolean[], number];

arrayOptions = ['config', true, true, 12]; // works
arrayOptions = ['config', true, 12];       // works too
arrayOptions = ['config', 12];             // works too

function printConfig(data: string) {
  console.log(data);
}

printConfig(arrayOptions[0]);
Note that something like this is invalid:
let arrayOptions: [string, ...boolean[], number?];
An optional element can’t follow a rest element. However, note that ...boolean[] does accept an empty array, so that Tuple would accept [string, number] values.
Let’s see that in detail in the following example:
let arrayOptions: [string, ...boolean[], number];

arrayOptions = ['config', 12]; // works
The in operator is handy for checking whether a method or a property exists in an object. However, in JavaScript, it will fail at runtime if it’s checked against a primitive.
Now, when you try to do this:
"method" in 23
//          ^^
// Error: The right-hand side of an 'in' expression must not be a primitive.
You’ll get an error telling you explicitly what’s going on. As this operator has been made stricter, this release might introduce breaking changes.
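One common way to stay safe is to guard the check so the right-hand side is never a primitive; here is a plain JavaScript sketch (the helper name hasMethod is hypothetical):

```javascript
function hasMethod(target, name) {
  // Guard: `in` may only be used on objects, never on primitives or null
  return typeof target === 'object' && target !== null && name in target
}

console.log(hasMethod({ method: () => 23 }, 'method')) // true
console.log(hasMethod(23, 'method'))                   // false: primitive, no runtime error
```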
--noPropertyAccessFromIndexSignature
Yet another compiler configuration that’s always interesting. In TypeScript, as in JavaScript, you can access properties using the bracket syntax or the dot syntax. Bracket access is possible when the key is a string.
interface Person {
  name: string;
}

const p: Person = { name: 'Max' };

console.log(p.name)    // Max
console.log(p['name']) // Max
There’s a situation that has led to explicit property mistyping:
interface Person {
  name: string;
  [key: string]: string;
}

const p: Person = { name: 'Max' };

console.log(p.namme)    // undefined
console.log(p['namme']) // undefined
Note how we are accessing the wrong property namme but because it fits the [key: string] implicit one, TypeScript won’t fail.
Enabling --noPropertyAccessFromIndexSignature will make TypeScript look for the explicit property when using the dotted syntax.
interface Person {
  name: string;
  [key: string]: string;
}

const p: Person = { name: 'Max' };

console.log(p.namme)
//            ^^^^^
// Error

console.log(p['namme']) // works fine
It’s not part of the strict configuration as this might not suit all developers and codebases.
Template literal types were introduced in 4.1, and here they got smarter. Previously, a template string expression would not be given a template literal type.
type PropertyType = `get${string}`;

function getProperty(property: PropertyType, target: any) {
  return target[property];
}

getProperty('getName', {}); // works

const propertyName = 'Name';
const x = `get${propertyName}`;

getProperty(x, {});
//          ^
// Error: Argument of type 'string' is not assignable to parameter of type '`get${string}`'
The core problem is that string expressions resolve to type string, which leads to this type incompatibility:
const x = `get${propertyName}`; // string
However, with 4.2, template string expressions will always start out with the template literal type:
const x = `get${propertyName}`; // getName
TypeScript’s uncalled function checks now apply within && and || expressions. Under --strictNullChecks, you will get the following error:
function isInvited(name: string) {
  return name !== 'Robert';
}

function greet(name: string) {
  if (isInvited) {
  //  ^^^^^^^^^
  // Error:
  // This condition will always return true since the function is always defined.
  // Did you mean to call it instead?
    return `Welcome ${name}`;
  }
  return `Sorry you are not invited`;
}
Sometimes it can be quite challenging to work out where the TypeScript file definitions are pulled from. It is often a process of trial and error.
It’s now possible to get a deeper insight into what’s going on, making the compiler more verbose, using the following:
tsc --explainFiles
Let’s see the result:
Pic courtesy: betterprogramming.pub
It is an awesome feature that will help you further understand TypeScript’s internals.
Performance optimization of frontend applications plays an important role in the application architecture. A higher-performing application will ensure an increase in user retention, improved user experience, and higher conversion rates.
According to Google, 53% of mobile phone users leave a site if it takes more than 3 seconds to load. At the same time, more than half of the pages tested are heavy in terms of the bandwidth they utilize to download the required assets. Don’t forget: your frontend application’s performance directly affects its search ranking and conversion rates.
We use the Vue JS framework for our frontend applications. The challenge we had was with the landing page, which was taking around 3.8 seconds to load, with 4.2 MB of resources to be downloaded. As the response time was quite high, it was challenging to retain users.
This article shares some of the implementation changes we made to improve the performance of our frontend application.
Image compression is really important when optimizing frontend applications. Lighter images get downloaded faster and load in less time compared to larger images. By compressing the images, we can make our site much lighter, which results in faster page load times.
WebP is a modern image format that provides superior lossless and lossy compression for images on the web. Using WebP, webmasters and web developers can create smaller, richer images that make the web faster.
WebP lossless images are 26% smaller in size compared to PNGs. WebP lossy images are 25–34% smaller than comparable JPEG images at equivalent SSIM quality index.
WebP is supported by Chrome, Firefox, Edge, and Safari from version 14 and above. Please feel free to read more about WebP.
Pic courtesy: medium.com
It is evident that download time has reduced after applying WebP compression.
Synchronous component loading is the process of loading components with a static import statement, which is the basic way of loading a component.
Components loaded with a static import statement are added to the existing application bundle. If code splitting is not used, the application core becomes huge, which affects the overall performance of the application.
The below code snippet is an example of static component loading of store and locale components.
import store from '@common/src/store'
import locale from '@common/src/util/locale'
Asynchronous components loading is the process where we load chunks of our application in a lazy manner. It ensures that components are only loaded when they are needed.
Lazy loading ensures that the bundle is split and serves only the needed parts so users are not waiting to download and parse the code that will not be used.
In the below code snippet, the image of YouTube is loaded asynchronously when it’s needed.
<template>
  <lazy-image
    :lazy-src="require('@/assets/images/icon/youtube.png')"
    alt="YouTube"
    draggable="false"
  />
</template>

<template>
  <img
    v-if="lazySrc"
    ref="lazy"
    :src="defaultImage"
    :data-src="lazySrc"
    :alt="alt"
    class="lazyImage"
    @error="handleError">
  <img
    v-else
    :src="defaultImage">
</template>
To dynamically load a component, we declare a const and append an arrow function followed by the default static import statement.
We can also add a webpack magic comment. The comment tells webpack to assign our chunk the name we provided; otherwise, webpack will auto-generate a name by itself.
const MainBanner = () => import(/* webpackChunkName: "c-main-banner" */ '@/components/MainBanner')
If we go to our developer tools and open the Network tab we can see that our chunk has been assigned the name we provided in the webpack’s chunk name comment.
Pic courtesy: medium.com
According to MDN Web Docs, code splitting is the process of splitting the application code into various bundles or components which can then be loaded on demand or in parallel.
As an application is used extensively, it accumulates changes and new requirements; with time, its complexity grows, its CSS and JavaScript files or bundles grow in size, and don’t forget the third-party libraries we use.
We don’t have much control over third-party library downloads, as they are required for our application to work. But at least we should make sure our own code is split into multiple smaller files. The features required at page load can then be downloaded quickly, with additional scripts lazy-loaded after the page or application becomes interactive, thus improving performance.
We have seen some frontend developers argue that this will increase the number of files while the code remains the same. We completely agree with them, but the main point here is that the amount of code needed during the initial load can be reduced.
Code splitting is a feature supported by bundlers like webpack and Browserify, which can create multiple bundles that are dynamically loaded at runtime. Alternatively, we can do it the old-school way: the code required for individual Vue files can be separated and loaded on demand.
Basically, third-party requests can slow down page loads for several reasons like slow networks, long DNS lookups, multiple redirects, slow servers, poor performing CDN, etc.
As third-party resources (e.g., Facebook or Twitter, or MoEngage) do not originate from your domain, their behavior is sometimes difficult to predict and they may negatively affect page experience for your users.
Using preconnect helps the browser prioritize important third-party connections and speeds up your page load as third-party requests may take a long time to process. Establishing early connections to these third-party origins by using a resource hint like preconnect can help reduce the time delay usually associated with these requests.
preconnect is useful when you know the origin of the third-party request but don’t know what the actual resource itself is. It informs your browser that the page intends to connect to another origin and that you would like this process to start as soon as possible. The browser closes any connection that isn’t used within 15 seconds, so preconnect should only be used for the most critical third-party domains.
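As a sketch, a preconnect hint is a single link element in the document head (the third-party origin below is an illustrative placeholder):

```html
<head>
  <!-- Tell the browser to open a connection to this third-party origin early -->
  <link rel="preconnect" href="https://example-analytics.com">
  <!-- Resolve DNS only, as a lighter fallback for older browsers -->
  <link rel="dns-prefetch" href="https://example-analytics.com">
</head>
```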
As a part of best practices, we need to make sure we don’t have any commented-out code in JS or CSS files. It’s commented out because we don’t want to use it, so we’d better get rid of it, as commented code contributes to an increase in file size.
As part of frontend application development, we might use some CSS frameworks but you will only use a small set of the framework styles, and a lot of unused CSS styles will be included.
According to PurgeCSS, it’s a tool to remove unused CSS. It can be part of your development workflow. PurgeCSS analyzes your content and your CSS files then it matches the selectors used in your files with the ones in your content files. It removes unused selectors from your CSS, resulting in smaller CSS files.
Also, when importing anything from third-party libraries, we can use the tree-shaking mechanism to avoid including unused CSS and JS code from those bundles. To analyze this kind of unwanted JS and CSS from third-party libraries, there is a tool called webpack-bundle-analyzer.
Angular is one of the most popularly used frameworks with best-designed practices and tools for app development companies. Angular encourages the developers to use components to split the user interface into reusable and different pieces. There are many popular Angular component libraries available in the market that can help the Angular development companies create a robust application for their clients.
In this blog, we will go through some of the most popular Angular component libraries that one can use in 2021.
Angular components are created using Angular and TypeScript. These components are implemented with Google’s material design, and they enable Angular developers to split the UI into various pieces. Some of the fantastic aspects that make developers use an Angular component library are:
The components in Angular are created in a similar manner to the modules. It properly depends on the developers on which to use and when to use it.
The Angular component libraries are very responsive in nature, making it crucial for website designing & development.
Angular component libraries are user-friendly and are built in a lightweight manner. It is effortless to learn and use for any Angular developer.
NGX Bootstrap is one of the most popular open-source Angular components. It gives vastness in bootstrap capabilities and helps developers utilize it on the next Angular app development project for their clients.
NGX Bootstrap has earned 5.2k stars from the GitHub community.
Features of NGX Bootstrap
The ngx-bootstrap team put effort into making ngx-bootstrap modular, which helps development companies implement their own styles and templates. All the components are designed with adaptivity and extensibility in mind, and they work efficiently on desktop and mobile platforms with the same level of performance.
NGX Bootstrap offers well-written documentation that can significantly help AngularJS developers ease their work to improve software quality. The team at ngx-bootstrap provides easy to understand and complete documentation.
NGX Bootstrap has incorporated a set of guidelines that can help in enhancing the code readability and maintainability.
Components of NGX Bootstrap
NG Bootstrap is a popular Angular bootstrap component library. It has around 7.6k stars on GitHub. When working with NG Bootstrap, there is no need for third-party JS dependencies. It also has high test coverage.
Features of NG Bootstrap
The NG bootstrap offers widgets like modal, tablet, rating, and tooltip.
The NG bootstrap offers unique widgets and gives complete access to them. The NG bootstrap team uses HTML elements and attributes that can help AngularJS app development companies create robust applications. This library also provides focus management work and keyboard navigation.
The team at NG bootstrap tests the code with 100% convergence and reviews all the changes.
There is a bootstrap/angular-UI team created for developing widgets, and it includes many core Angular contributors.
Teradata Covalent is a UI platform created on Angular and Angular Material. It comes with solutions that combine a comprehensive web framework with a proven design language. It gives AngularJS developers a quick start in creating a modern web application. Teradata Covalent has 2.2k stars on GitHub.
Angular Command-line interface enables the developers to work with Angular-material and create, deploy, & test the application. It offers simplified stepper, file upload, user interface layout, custom web components, expansion panels, and more testing tools for both end-to-end tests and unit tests.
Features of Teradata
Components of Teradata
Nebular is an Angular 8 UI library that focuses on the brand’s adaptability and design. It has four visual themes with support for custom CSS properties. This library is based on the Eva Design System. Nebular holds a few security modules and around 40+ UI components, some of which are stated below. Besides this, it also has 6.7k stars in the GitHub community.
Features of Nebular
Components of Nebular
Clarity is an open-source Angular component that acts as a bridge between the HTML framework and Angular components. Clarity is the best platform for both software developers and designers.
Clarity library offers implemented data-bound components and a well-structured option to the Angular development service providers. It also owns 6.1k GitHub stars.
Features of Clarity
The Clarity team offers an easy-to-understand and easy-to-use platform that helps developers solve a vast array of challenges.
It is the most reliable platform as it provides a high bar of quality.
Clarity is designed in a way that makes communication and collaboration of expertise very easy and rapid.
With new technologies and techniques coming into the picture, Clarity keeps on evolving.
Components of Clarity
Onsen UI is a component library that is one of the most used by Angular development service companies for creating mobile web apps for Android and iOS using JavaScript. It has 8.2k stars in the GitHub community.
Onsen UI is a library that comes with development tools and powerful CLI with Monaca. The main benefits of Onsen UI are its UI components that can easily be plugged into the mobile application.
Features of Onsen UI
Monaca is a cross-platform tool for creating hybrid apps, and Onsen UI performs very well with it.
It provides ready-to-use components like toolbar, forms, side menu, and much more to give a native look. Besides this, Onsen UI also supports Android and iOS material design, making the appearance and style of the application look according to the selected platform.
The new version of Onsen UI is now enabled to provide optimized performance without slowing up the process.
Despite being a powerful tool to develop a mobile application, it is straightforward to learn and use.
Onsen UI allows the developer to work with technologies like CSS, HTML, and JavaScript. These are the technologies that they might already know, so it would take zero-time to get started with the tool.
Components of Onsen UI
Angular Version 11 release has updates across the platform including the framework, the CLI and components. Let’s dive in!
To make your apps even faster by speeding up their first contentful paint, we’re introducing automatic font inlining. During compile time Angular CLI will download and inline fonts that are being used and linked in the application. We enable this by default in apps built with version 11. All you need to do to take advantage of this optimization is update your app!
In Angular v9 we introduced Component Test Harnesses. They provide a robust and legible API surface to help with testing Angular Material components. It gives developers a way to interact with Angular Material components using the supported API during testing.
Releasing with version 11, we have harnesses for all of the components! Now developers can create more robust test suites.
We’ve also included performance improvements and new APIs. The parallel function makes working with asynchronous actions in your tests easier by allowing developers to run multiple asynchronous interactions with components in parallel. The manualChangeDetection function gives developers access to finer-grained control of change detection by disabling automatic change detection in unit tests.
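Conceptually, parallel batches several pending harness queries and awaits them together, much like Promise.all. Below is a minimal self-contained stand-in (not the actual CDK implementation; the isChecked/isDisabled queries are simulated stand-ins for real harness methods):

```typescript
// Hypothetical stand-in for the CDK's parallel() helper: run several
// async queries concurrently and await them all together.
async function parallel<T>(values: () => (Promise<T> | T)[]): Promise<T[]> {
  return Promise.all(values());
}

// Simulated harness queries (in a real test these would be methods on a
// Material component harness, e.g. checkbox.isChecked()).
const isChecked = async (): Promise<boolean> => true;
const isDisabled = async (): Promise<boolean> => false;

async function readState() {
  // Both queries are in flight at the same time instead of one after another.
  const [checked, disabled] = await parallel(() => [isChecked(), isDisabled()]);
  return { checked, disabled };
}
```

The real API lives in @angular/cdk/testing; the point here is only the batching pattern.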
For more details and examples of these APIs and other new features, be sure to check out the documentation for Angular Material Test Harnesses!
We’ve made changes to the builder phase reporting to make it even more helpful during development. We are bringing in new CLI output updates to make logs and reports easier to read.
[Screenshot: improved CLI output formatting, nicely aligned into columns. Pic courtesy: blog.angular.io]
The Angular Language Service provides helpful tools to make development with Angular productive and fun. The current version of the language service is based on View Engine and today we’re giving a sneak peek of the Ivy-based language service. The updated language service provides a more powerful and accurate experience for developers.
Now, the language service will be able to correctly infer generic types in templates the same way the TypeScript compiler does. For example, in the screenshot below we’re able to infer that the iterable is of type string.
[Screenshot: Angular Language Service inferring iterable types in templates. Pic courtesy: blog.angular.io]
This powerful new update is still in development but we wanted to share an update as we keep preparing it for a full release in an upcoming version.
Angular has offered support for HMR, but enabling it required configuration and code changes, making it less than ideal to quickly include in Angular projects. In version 11, we’ve updated the CLI to allow enabling HMR when starting an application with ng serve. To get started, run the following command:
ng serve --hmr
After the local server starts the console will display a message confirming that HMR is active:
NOTICE: Hot Module Replacement (HMR) is enabled for the dev server.
Now during development the latest changes to components, templates and styles will be instantly updated into the running application, all without requiring a full page refresh. Data typed into forms is preserved, as is scroll position, providing a boost to developer productivity.
We’re bringing a faster development and build cycle by making updates to some key areas.
Now, teams can opt in to webpack v5. Currently, you can experiment with module federation. In the future, webpack v5 will clear the path for further build speed and bundle size improvements.
Support is experimental and under development so we don’t recommend opting in for production uses.
Want to try out webpack 5? To enable it in your project, add the following section to your package.json file:
"resolutions": {
  "webpack": "5.4.0"
}
Currently, you’ll need to use yarn to test this as npm does not yet support the resolutions property.
In previous versions of Angular, we’ve shipped a default implementation for linting (TSLint). Now, TSLint is deprecated by the project creators who recommend migration to ESLint. James Henry together with other folks from the open-source community developed a third-party solution and migration path via typescript-eslint, angular-eslint and tslint-to-eslint-config! We’ve been collaborating closely to ensure a smooth transition of Angular developers to the supported linting stack.
We’re deprecating the use of TSLint and Codelyzer in version 11. This means that in future versions the default implementation for linting Angular projects will not be available.
Head over to the official project page for a guide to incorporate angular-eslint in a project and migrate from TSLint.
In this update we’re removing support for IE9/IE10 and IE mobile. IE11 is the only version of IE still supported by Angular. We’ve also removed deprecated APIs and added a few to the deprecation list. Be sure to check this out to make sure you are using the latest APIs and following our recommended best practices.
We’ve also updated the roadmap to keep you posted on our current priorities. Some of the announcements in this post are updates on in-progress projects from the roadmap. This reflects our approach of incrementally rolling out larger efforts, and allows developers to provide early feedback that we can incorporate into the final release.
We collaborated with Lukas Ruebbelke from the Angular community on updating the content of some of the projects to better reflect the value they provide to developers.
A function-like HTML segment is a block of HTML that can accept context variables (in other words, parameters). A typical Angular component has two major parts of logic: an HTML template and a TypeScript class. The capability to utilize this kind of function-like HTML segment is essential for a good shared component, because a shared component with only a fixed HTML template can hardly fit the needs of all the different use cases. Trying to satisfy every potential use case with a single, fixed HTML template usually ends up with a large template full of conditional statements (like *ngIf), which is painful to read and maintain.
Here we would like to explain, with an example, how we can utilize TemplateRef to define function-like HTML segments for communication between templates, which is a good solution to the large-template problem.
Assume that there is a shared component DataListComponent, which takes an array of data and displays them in the view:
export interface DataTableRow {
  dataType: string;
  value: any;
}

@Component({
  selector: 'data-list',
  template: `
    <div *ngFor="let row of data" [ngSwitch]="row.dataType">
      <div *ngSwitchCase="'string'">{{row.value}}</div>
      <div *ngSwitchCase="'number'"># {{row.value | number}}</div>
      <div *ngSwitchCase="'date'">{{row.value | date}}</div>
    </div>
  `
})
export class DataListComponent {
  @Input() data: DataTableRow[] = [];
}
It understands only three types of data for now: string, number, and date. When we want to add more types, the easiest way is to simply add more switch cases. That is totally fine when the new types are generic enough to have universal representations. Yet, for data whose presentation depends on the consuming component, adding more switch cases makes the code very dirty.
Say we want to add the new type boolean which displays true/false in FirstComponent, yes/no in SecondComponent. If we simply go for the more-switch-cases solution, it may have something like this:
<div *ngSwitchCase="'boolean-firstComponent'">
  {{ row.value ? 'true' : 'false' }}
</div>
<div *ngSwitchCase="'boolean-secondComponent'">
  {{ row.value ? 'yes' : 'no' }}
</div>
This approach is bad as the shared component now contains component-specific logic. Besides, this block of code is going to expand really fast when there are more new use cases in the future, which will soon become a disaster. Ideally, we want to pass HTML segments from the parents, so that we can keep those specific logic away from the shared component.
@Component({
  template: `
    <data-list [data]="data">
      <!-- component specific logic to display true/false -->
    </data-list>
  `,
  ...
})
export class FirstComponent {...}

@Component({
  template: `
    <data-list [data]="data">
      <!-- component specific logic to display yes/no -->
    </data-list>
  `,
  ...
})
export class SecondComponent {...}
The logic behind it is actually very straightforward. First, we need to define templates with context in the consuming components:
@Component({
  template: `
    <data-list [data]="data">
      <ng-template let-value="value">
        {{value ? 'true' : 'false'}}
      </ng-template>
    </data-list>
  `,
  ...
})
export class FirstComponent {...}

@Component({
  template: `
    <data-list [data]="data">
      <ng-template let-value="value">
        {{value ? 'yes' : 'no'}}
      </ng-template>
    </data-list>
  `,
  ...
})
export class SecondComponent {...}
Next, we add the logic to read and present the template segment inside the shared component:
@Component({
  selector: 'data-list',
  template: `
    <div *ngFor="let row of data" [ngSwitch]="row.dataType">
      <div *ngSwitchCase="'string'">{{row.value}}</div>
      <div *ngSwitchCase="'number'"># {{row.value | number}}</div>
      <div *ngSwitchCase="'date'">{{row.value | date}}</div>
      <div *ngSwitchCase="'boolean'">
        <ng-container *ngTemplateOutlet="rowTemplate; context:{ value: row.value }"></ng-container>
      </div>
    </div>
  `
})
export class DataListComponent {
  @Input() data: DataTableRow[] = [];
  @ContentChild(TemplateRef) rowTemplate: TemplateRef<any>;
}
Now we have a shared component that is capable of interpreting an HTML segment from the outside. Yet, it is still not ideal. What if we have more than one template?
This one is trickier. Although TemplateRef is capable of receiving context, it doesn’t have a name or ID that we can rely on to distinguish multiple templates from each other programmatically. As a result, we need to add a wrapper component on top of it when we have more than one template, so that we can add identifiers.
@Component({
  selector: 'custom-row-definition',
  template: ''
})
export class CustomRowDefinitionComponent {
  @Input() dataType: string;
  @ContentChild(TemplateRef) rowTemplate: TemplateRef<any>;
}
Instead of directly retrieving the TemplateRef in the shared component, we retrieve the wrapper:
@Component({
  selector: 'data-list',
  template: `
    <div *ngFor="let row of data" [ngSwitch]="row.dataType">
      <div *ngSwitchCase="'string'">String: {{row.value}}</div>
      <div *ngSwitchCase="'number'"># {{row.value | number}}</div>
      <div *ngSwitchCase="'date'">{{row.value | date}}</div>
      <ng-container *ngFor="let def of customRowDefinitions">
        <ng-container *ngSwitchCase="def.dataType">
          <ng-container *ngTemplateOutlet="def.rowTemplate; context:{ value: row.value }"></ng-container>
        </ng-container>
      </ng-container>
    </div>
  `
})
export class DataListComponent {
  @Input() data: DataTableRow[] = [];
  @ContentChildren(CustomRowDefinitionComponent) customRowDefinitions: QueryList<CustomRowDefinitionComponent>;
}
(Having multiple ng-container elements together with structural directives may potentially cause performance issues, but that is not the main point of this article, so we leave it as is for simplicity.)
In this example, we use the dataType property inside the wrapper as identifiers for the templates. As a result, we can now define multiple templates with different dataType.
@Component({
  selector: 'app-root',
  template: `
    <data-list [data]="data">
      <custom-row-definition dataType="array">
        <ng-template let-value="value">
          {{value.join(' - ')}}
        </ng-template>
      </custom-row-definition>
      <custom-row-definition dataType="money">
        <ng-template let-value="value">
          $ {{value | number}}
        </ng-template>
      </custom-row-definition>
    </data-list>
  `
})
export class AppComponent {
  data: DataTableRow[] = [
    { dataType: 'string', value: 'Row 1' },
    { dataType: 'number', value: 500 },
    { dataType: 'date', value: new Date() },
    { dataType: 'array', value: [1, 2, 3, 4] },
    { dataType: 'money', value: 200 }
  ]
}
Some may ask: why don’t we just use ng-content with a name to project the content from the outside? The major difference is the capability to pass context (parameters). ng-content is like a function without parameters; it cannot achieve real mutual communication between templates. It is a one-way channel that merges HTML segments from the outside with no real interaction with the template inside, so it cannot support use cases like the example above.
The fifth major version of webpack was released recently, almost two years after the last major release (4). It brings a lot of changes to the most used module bundler in the JavaScript ecosystem. If, like me, you started your front-end career prior to the rise of webpack, you remember the pain and frustration of working with tools like gulp and grunt.
Let’s take a look at the breaking changes and the improvements that come with the new release of this incredibly popular library.
This new version concentrates on five key areas.
Slow builds are one of the most common complaints from developers about webpack. The module bundler now offers an opt-in filesystem cache. This should improve our productivity as developers by speeding up our development builds.
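As a sketch of how the opt-in works (option names from the webpack 5 configuration docs), the filesystem cache is enabled in webpack.config.js like this:

```javascript
// webpack.config.js (sketch): opt in to the persistent filesystem cache.
// Subsequent builds reuse the cache written to node_modules/.cache/webpack,
// which can dramatically speed up warm development builds.
module.exports = {
  // ...the rest of your config...
  cache: {
    type: 'filesystem',
  },
};
```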
Improvements have been made to tree shaking (also known as dead code elimination). While previous versions of webpack were able to remove unused code, version 5 takes it even further. webpack is now able to remove code inside of modules, leading to even smaller bundle sizes. To read more about all of the optimization features of webpack 5, check out the official documentation.
After bundle size, the thing that can improve your app loading time the most is caching. With caching, returning visitors to your application experience an almost instantaneous loading experience. With webpack 5, changes made to your code that don’t change the minimized version (eg, comments or variable names), do not result in cache invalidation. This means that your users will be able to experience the performance improvements of caching for longer.
Some of the changes introduced in this version will not have any visible impact on your application’s performance today. Instead, they are meant to allow for new features and improvements in later versions of webpack 5.
These future features include using http(s) imports as module externals. This will help with the development of micro frontends. To read more about these new and exciting features, check out the official documentation here.
Another breaking change is bumping the minimum Node.js version from 6 to 10.13.0. Dropping support for older Node.js versions will allow the team to simplify their code and remove workarounds for these older versions.
webpack 5 also brings a new experiments configuration option with support for WebAssembly, async WebAssembly, top-level await, and outputting your bundle as a module (previously only possible with rollup).
This new feature, in short, allows multiple webpack builds to work together. It allows your application to dynamically load code from another application (aka, a different webpack build). The most popular application of module federation is to enable micro-frontend architecture.
Going over module federation in detail is beyond the scope of this article. If you are interested in learning more, be sure to read the official webpack post here.
Here’s a rundown of the breaking changes that made it into this version and corresponding migration advice.
All items that were marked as deprecated in version 4 have been removed. If your webpack 4 build prints deprecation warnings, be sure to address those before upgrading.
The plugins IgnorePlugin and BannerPlugin accept different arguments. Read more here.
In previous versions of webpack, polyfills for native Node.js libraries like crypto were included. These have been removed. Instead, you should use frontend-focused libraries or install the polyfills yourself.
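If you do need one of these modules in the browser, webpack 5's resolve.fallback option lets you wire a polyfill in yourself. A sketch (the crypto-browserify package here is illustrative; install whichever polyfill you actually need):

```javascript
// webpack.config.js (sketch): re-adding a polyfill that webpack 5 no longer ships.
module.exports = {
  // ...the rest of your config...
  resolve: {
    fallback: {
      // Use a browser implementation when code does require('crypto')...
      crypto: require.resolve('crypto-browserify'),
      // ...or set false to tell webpack to omit a module you don't need.
      fs: false,
    },
  },
};
```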
This is a personal question, and it really depends on how you use webpack in your project. Most developers who use webpack use a lot of plugins, so you need to make sure that the plugins you use support this new version.
If you are using Next.js, you can upgrade to webpack 5 by setting the version as a yarn resolution in your package.json. But again, if you have a custom webpack config, you will need to ensure that your config works with webpack 5.
The big advantage (and disadvantage for some) of create-react-app, is that there is no official way to customize your webpack config. For those of you using CRA, you will need to wait until react-scripts is upgraded to support webpack 5. According to a contributor, this should happen in Create-React-App version 4.1 (source).
For more information about migrating from version 4 to 5, be sure to check out the official migration guide.
This new release of webpack makes me even more excited for the future of frontend development. It’s so refreshing to see new features and improvements to a tool that we use every day. We should see its improvements driving innovation in the community for the next few years.
Most frontend developers don’t end up touching webpack very much, and just assume that it “just works”. We’ve said it before, and we’ll say it again: we think this is a mistake. Understanding how your build tools work makes you a stronger developer and is invaluable in debugging errors.
While the webpack team will continue to support version 4, by fixing bugs and adding features, for the foreseeable future, they suggest that you upgrade to version 5. With almost any library (except perhaps React), there comes a time when you need to make breaking changes and architectural improvements that rely on those breaking changes.
In short, while dealing with breaking changes is annoying, we don’t think it’s too much to ask to make some changes to your application’s configuration every two years in exchange for a better and faster build system.
Browsers don’t understand JSX out of the box, so most React users rely on a compiler like Babel or TypeScript to transform JSX code into regular JavaScript. Many preconfigured toolkits like Create React App or Next.js also include a JSX transform under the hood.
Together with the React 17 release, we’ve wanted to make a few improvements to the JSX transform, but we didn’t want to break existing setups. This is why we worked with Babel to offer a new, rewritten version of the JSX transform for people who would like to upgrade.
Upgrading to the new transform is completely optional, but it has a few benefits; most notably, you can use JSX without importing React.
This upgrade will not change the JSX syntax and is not required. The old JSX transform will keep working as usual, and there are no plans to remove the support for it.
React 17 RC already includes support for the new transform, so go give it a try! To make it easier to adopt, after React 17 is released, we also plan to backport its support to React 16.x, React 15.x, and React 0.14.x. You can find the upgrade instructions for different tools below.
Now let’s take a closer look at the differences between the old and the new transform.
When you use JSX, the compiler transforms it into React function calls that the browser can understand. The old JSX transform turned JSX into React.createElement(…) calls.
For example, let’s say your source code looks like this:
import React from 'react';

function App() {
  return <h1>Hello World</h1>;
}
Under the hood, the old JSX transform turns it into regular JavaScript:
import React from 'react';

function App() {
  return React.createElement('h1', null, 'Hello world');
}
Note Your source code doesn't need to change in any way. We're describing how the JSX transform turns your JSX source code into the JavaScript code a browser can understand.
However, this is not perfect: because JSX was compiled into React.createElement, React needed to be in scope whenever you used JSX, and React.createElement does not allow for some potential performance improvements and simplifications.
To solve these issues, React 17 introduces two new entry points to the React package that are intended to only be used by compilers like Babel and TypeScript. Instead of transforming JSX to React.createElement, the new JSX transform automatically imports special functions from those new entry points in the React package and calls them.
Let’s say that your source code looks like this:
function App() {
  return <h1>Hello World</h1>;
}
This is what the new JSX transform compiles it to:
// Inserted by a compiler (don't import it yourself!)
import {jsx as _jsx} from 'react/jsx-runtime';

function App() {
  return _jsx('h1', { children: 'Hello world' });
}
Note how our original code did not need to import React to use JSX anymore! (But we would still need to import React in order to use Hooks or other exports that React provides.)
This change is fully compatible with all of the existing JSX code, so you won’t have to change your components. If you’re curious, you can check out the technical RFC for more details about how the new transform works.
Note The functions inside react/jsx-runtime and react/jsx-dev-runtime must only be used by the compiler transform. If you need to manually create elements in your code, you should keep using React.createElement. It will continue to work and is not going away.
If you aren’t ready to upgrade to the new JSX transform or if you are using JSX for another library, don’t worry. The old transform will not be removed and will continue to be supported.
If you want to upgrade, you will need two things: a version of React that supports the new transform (currently React 17 RC, with backports planned) and a compatible compiler (see the instructions for your toolchain below).
Since the new JSX transform doesn’t require React to be in scope, we’ve also prepared an automated script that will remove the unnecessary imports from your codebase.
Create React App support has been added and will be available in the upcoming v4.0 release which is currently in beta testing.
Next.js v9.5.3+ uses the new transform for compatible React versions.
Gatsby v2.24.5+ uses the new transform for compatible React versions.
Note If you get this Gatsby error after upgrading to React 17.0.0-rc.2, run npm update to fix it.
Support for the new JSX transform is available in Babel v7.9.0 and above.
First, you’ll need to update to the latest Babel and plugin transform.
If you are using @babel/plugin-transform-react-jsx:
# for npm users
npm update @babel/core @babel/plugin-transform-react-jsx
# for yarn users
yarn upgrade @babel/core @babel/plugin-transform-react-jsx
If you are using @babel/preset-react:
# for npm users
npm update @babel/core @babel/preset-react
# for yarn users
yarn upgrade @babel/core @babel/preset-react
Currently, the old transform ("runtime": "classic") is the default option. To enable the new transform, you can pass {"runtime": "automatic"} as an option to @babel/plugin-transform-react-jsx or @babel/preset-react:
// If you are using @babel/preset-react
{
  "presets": [
    ["@babel/preset-react", {
      "runtime": "automatic"
    }]
  ]
}
// If you're using @babel/plugin-transform-react-jsx
{
  "plugins": [
    ["@babel/plugin-transform-react-jsx", {
      "runtime": "automatic"
    }]
  ]
}
Starting from Babel 8, “automatic” will be the default runtime for both plugins. For more information, check out the Babel documentation for @babel/plugin-transform-react-jsx and @babel/preset-react.
Note If you use JSX with a library other than React, you can use the importSource option to import from that library instead - as long as it provides the necessary entry points. Alternatively, you can keep using the classic transform which will continue to be supported.
If you are using eslint-plugin-react, the react/jsx-uses-react and react/react-in-jsx-scope rules are no longer necessary and can be turned off or removed.
{
  // ...
  "rules": {
    // ...
    "react/jsx-uses-react": "off",
    "react/react-in-jsx-scope": "off"
  }
}
TypeScript supports the JSX transform in v4.1 beta.
Flow supports the new JSX transform in v0.126.0 and up.
Because the new JSX transform will automatically import the necessary react/jsx-runtime functions, React will no longer need to be in scope when you use JSX. This might lead to unused React imports in your code. It doesn’t hurt to keep them, but if you’d like to remove them, we recommend running a “codemod” script to remove them automatically:
cd your_project
npx react-codemod update-react-imports
Note If you're getting errors when running the codemod, try specifying a different JavaScript dialect when npx react-codemod update-react-imports asks you to choose one. In particular, at this moment the "JavaScript with Flow" setting supports newer syntax than the "JavaScript" setting even if you don't use Flow. File an issue if you run into problems. Keep in mind that the codemod output will not always match your project's coding style, so you might want to run Prettier after the codemod finishes for consistent formatting.
Running this codemod will remove all unused React imports and convert default React imports (import React from 'react') to destructured named imports where needed.
For example,
import React from 'react';

function App() {
  return <h1>Hello World</h1>;
}
will be replaced with
function App() {
  return <h1>Hello World</h1>;
}
If you use some other import from React – for example, a Hook – then the codemod will convert it to a named import.
For example,
import React from 'react';

function App() {
  const [text, setText] = React.useState('Hello World');
  return <h1>{text}</h1>;
}
will be replaced with
import { useState } from 'react';

function App() {
  const [text, setText] = useState('Hello World');
  return <h1>{text}</h1>;
}
In addition to cleaning up unused imports, this will also help you prepare for a future major version of React (not React 17) which will support ES Modules and not have a default export.
TypeScript 4.0 is a major milestone in the TypeScript programming language and has currently leapfrogged 3.9 to become the latest stable version. In this post, we’ll look at the new features TypeScript 4.0 offers.
To get started using 4.0, you can install it through NuGet or via NPM:
npm i typescript
You can test the code using the TypeScript playground or a text editor that supports TypeScript. I recommend using Visual Studio Code, you can get set up instructions here.
In a nutshell, we can say TypeScript is strongly typed JavaScript. It requires developers to accurately specify the format of their data types; consequently, it allows the compiler to catch type errors at compile time and therefore gives a better developer experience.
This process of accurately specifying the format of data types is known as type declaration or type definition (also called typings, or simply types).
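As a minimal illustration (the add function here is ours, for demonstration only):

```typescript
// Explicit type declarations: the compiler rejects mismatched arguments.
function add(a: number, b: number): number {
  return a + b;
}

const sum: number = add(2, 3); // 5

// add('2', 3) would fail at compile time with:
// Argument of type 'string' is not assignable to parameter of type 'number'.
```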
With this feature, TypeScript gives types to higher-order functions such as curry, concat, and apply. These are functions that take a variable number of parameters.
Consider a small contrived example of the concat function below:
function simpleConcat(arr1, arr2) {
  return [...arr1, ...arr2];
}

console.log(simpleConcat([1,2,3], [5,6])) // [1, 2, 3, 5, 6]
There is currently no easy way to type this in TypeScript. The only typing strategy available currently is to write overloads.
Function or method overloading refers to a feature in TypeScript that allows us to create multiple functions having the same name but a different number of parameters or types.
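For instance, a small hypothetical len function can be given two overload signatures backed by one implementation:

```typescript
// Two overload signatures visible to callers...
function len(x: string): number;
function len(x: unknown[]): number;
// ...and one implementation that covers both.
function len(x: string | unknown[]): number {
  return x.length;
}

console.log(len('abc'));  // 3
console.log(len([1, 2])); // 2
```

Callers see only the two narrow signatures, so len(42) is rejected at compile time.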
Consider this:
function concat1<T>(arr1: [T], arr2: []): [T] {
  return [...arr1, ...arr2]
}
function concat2<T1, T2>(arr1: [T1, T2], arr2: []): [T1, T2] {
  return [...arr1, ...arr2]
};
function concat6<T1, T2, T3, T4, T5, T6>(arr1: [T1, T2, T3, T4, T5, T6], arr2: []): [T1, T2, T3, T4, T5, T6] {
  return [...arr1, ...arr2]
}
function concat10<T1, T2, T3, T4, T5, T6, A1, A2, A3, A4>(arr1: [T1, T2, T3, T4, T5, T6], arr2: [A1, A2, A3, A4]): [T1, T2, T3, T4, T5, T6, A1, A2, A3, A4] {
  return [...arr1, ...arr2]
}

console.log("concated 1", concat1([1], []))
console.log("concated 2", concat2([1,2], []))
console.log("concated 6", concat6([1,2,3,4,5,6], []))
console.log("concated 10", concat10([1,2,3,4,5,6], [10, 11, 12, 13]))
From the example above, we can see that the number of type parameters grows as the number of items in the arrays grows, which is suboptimal. In concat6 we had to write six type parameters even when the second array is empty, and this quickly grew to ten in concat10 when the second array had just four items.
Also, we can only get correct types for as many overloads as we write.
TypeScript 4.0 comes with significant inference improvements. It allows spread elements in tuple types to be generic and to occur anywhere in the tuple.
In older versions, a rest element had to be last in a tuple type, and TypeScript would throw an error if this were not the case:
// Tuple spread items are generic
function concatNumbers<T extends Number[]>(arr: readonly [Number, ...T]) {
  // return something
}

// Spread occurring anywhere in the tuple is valid in 4.0
type Name = [string, string];
type ID = [number, number];
type DevTuples = [...Name, ...ID]
Given these two additions, we can write a better function signature for our concat function:
type Arr = readonly any[];

function typedConcat<T extends Arr, U extends Arr>(arr1: T, arr2: U): [...T, ...U] {
  return [...arr1, ...arr2];
}

console.log("concated", typedConcat([1,2,3,4,5], [66,77,88,99]))
This is a pithy addition to TypeScript aimed at improving code readability.
Consider the code below:
type Period = [Date, Date]; // Example 1: older versions

type Period = [StartDate: Date, EndDate: Date]; // Example 2: 4.0

function getAge(): [birthDay: Date, today: Date] {
  // ...
}
Previously, TypeScript developers used comments to describe tuples because the types themselves (Date, number, string) don’t adequately describe what the elements represent.
From our small contrived example above, "Example 2" is far more readable because of the labels added to the tuples.
When labelling tuples, all the items in the tuple must be labelled.
Consider the code below:
type Period = [startDate: Date, Date]; // incorrect
type Period = [StartDate: Date, EndDate: Date]; // correct
In TypeScript 4.0, we can now use control flow analysis to determine the types of properties in classes when noImplicitAny is enabled. Let’s elaborate on this with some code samples.
Consider the code below:
// Compile with --noImplicitAny
class CalArea {
  Square; // string | number
  constructor(area: boolean, length: number, breadth: number) {
    if (!area) {
      this.Square = "No area available";
    } else {
      this.Square = length * breadth;
    }
  }
}
Previously, the code above would not compile when noImplicitAny is enabled. This is because property types were only inferred from direct initializations, so their types had to either be declared explicitly or inferred from an initializer.
However, TypeScript 4.0 can use control flow analysis of this.Square assignments in constructors to determine the types of Square.
Currently, in JavaScript, a lot of binary operators can be combined with the assignment operator to form a compound assignment operator. These operators perform the binary operation on both operands and assign the result to the left operand:
// compound operators
foo += bar  // foo = foo + bar
foo -= bar  // foo = foo - bar
foo *= bar  // foo = foo * bar
foo /= bar  // foo = foo / bar
foo %= bar  // foo = foo % bar
The list goes on but with three exceptions:
||  // logical or operator
&&  // logical and operator
??  // nullish coalescing operator
TypeScript 4.0 allows us to combine these three with the assignment operator, forming three new compound operators:
```typescript
x ||= y // x || (x = y)
x &&= y // x && (x = y)
x ??= y // x ?? (x = y)
```
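The short-circuit semantics matter: ??= assigns only when the target is null or undefined, ||= assigns on any falsy value (including 0 and ""), and &&= assigns only when the target is already truthy. A quick sketch (the variable names are illustrative):

```typescript
interface Settings {
  retries?: number;
}

const s: Settings = { retries: 0 };

// ??= keeps the existing 0, because 0 is not null/undefined:
s.retries ??= 3;

// ||= would have replaced it, since 0 is falsy:
let volume = 0;
volume ||= 10;

// &&= assigns only when the target is already truthy:
let tag: string | undefined = "ts";
tag &&= tag.toUpperCase();
```

Choosing ??= over ||= is usually what you want for "default if unset" logic, precisely because it leaves legitimate falsy values alone.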
Previously, when we used a try…catch statement in TypeScript, the catch clause variable was always typed as any. Consequently, our error-handling code lacked the type safety that should prevent invalid operations. I will elaborate with some code samples below:
```typescript
try {
  // ...
} catch (error) {
  error.message
  error.toUpperCase()
  error.toFixed()
  // ...
}
```
From the code above, we can see that we are allowed to do anything we want with error, which is exactly what we don’t want.
TypeScript 4.0 resolves this by allowing us to annotate the catch variable as unknown. This is safer because it reminds us to do manual type checking in our code:
```typescript
try {
  // ...
} catch (error: unknown) {
  if (typeof error === "string") {
    error.toUpperCase()
  }
  if (typeof error === "number") {
    error.toFixed()
  }
  // ...
}
```
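In practice, most thrown values are Error instances, so narrowing with an instanceof check is the common pattern. A hedged sketch (`parseConfig` is an illustrative helper, not part of the original article):

```typescript
function parseConfig(json: string): Record<string, unknown> {
  try {
    return JSON.parse(json);
  } catch (error: unknown) {
    // error is unknown, so we must narrow before touching .message:
    if (error instanceof Error) {
      throw new Error(`Invalid config: ${error.message}`);
    }
    throw new Error("Invalid config: unknown error");
  }
}
```

The compiler rejects `error.message` until the instanceof guard proves the value is an Error, which is exactly the manual check the unknown annotation is meant to force.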
TypeScript already supports the jsxFactory compiler option. TypeScript 4.0, however, adds a new compiler option called jsxFragmentFactory, which lets users customize the JSX fragment factory in tsconfig.json:
```json
{
  "compilerOptions": {
    "target": "esnext",
    "module": "commonjs",
    "jsx": "react",                  // React JSX compiler option
    "jsxFactory": "createElement",   // transforms JSX using createElement
    "jsxFragmentFactory": "Fragment" // transforms JSX fragments using Fragment
  }
}
```
The above tsconfig.json configuration transforms JSX in a way that is compatible with React, so a JSX snippet such as <article /> is transformed with createElement instead of React.createElement. It also tells TypeScript to use Fragment instead of React.Fragment when transforming JSX fragments.
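The same mechanism works for other JSX runtimes. For example, a Preact project, where the factory functions are conventionally named h and Fragment, might use a configuration like this (an illustrative sketch, not from the original article):

```json
{
  "compilerOptions": {
    "jsx": "react",
    "jsxFactory": "h",
    "jsxFragmentFactory": "Fragment"
  }
}
```

With this configuration, both <div /> and <></> compile to calls against Preact's h and Fragment rather than React's.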
TypeScript 4.0 also features great performance improvements in --build mode scenarios, and it allows us to use the --noEmit flag while still leveraging --incremental compiles, which was not possible in older versions.
In addition, there are several editor improvements, such as recognition of @deprecated JSDoc annotations, smarter auto-imports, and a partial editing mode at startup (which aims to speed up startup time).
© 2025 — HK Infosoft. All Rights Reserved.