There are many ways one can structure an Angular app. But this is how we structure our applications for extensive flexibility, scalability, and small initial bundle size.
Core directory is the place where you put singleton services, injection tokens, constants, app configurations, pipes, interceptors, guards, auth service, utils, etc. that will be used app-wide. If there is something which is specific to the application itself, deployment, CI/CD, API, and the Developer – chances are, it belongs to the core.
Business features live in the features directory. Make a module per feature. That module can contain components, directives, pipes, services, interfaces, enums, utils, and so on. The idea is to keep things close. So, a pipe that is used solely in the Speakers module should not be defined in the global scope or inside core. The same goes for any other Angular building block required by this module only.
Components are prefixed according to the module name e.g.- if the module name is SpeakersModule, components would be named SpeakerAbcComponent, SpeakerXyzComponent etc.
Keep the component tree inside the directory flat. That means, if SpeakerListComponent is the parent and SpeakerListItemComponent is the child, do not create the speaker-list-item component inside the speaker-list directory. The prefixed naming should be clear enough to indicate such a relation. The idea is to be able to see what components reside in the module at a glance.
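To make the convention concrete, here is a minimal sketch of what such a feature module could look like (the file names and the SpeakerRolePipe are hypothetical, not prescribed by this structure):

import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';

import { SpeakerListComponent } from './speaker-list/speaker-list.component';
import { SpeakerListItemComponent } from './speaker-list-item/speaker-list-item.component';
import { SpeakerRolePipe } from './speaker-role.pipe';

@NgModule({
  // Everything the Speakers feature needs lives here, prefixed and flat.
  declarations: [SpeakerListComponent, SpeakerListItemComponent, SpeakerRolePipe],
  imports: [CommonModule],
  // Only the top-level component is exposed to other modules.
  exports: [SpeakerListComponent],
})
export class SpeakersModule {}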
Feature modules can import other feature modules and, of course, shared modules.
Consider shared modules a mini library for your UI components. They are not specific to a single business feature. They should be so dumb that you could take all the components, drop them into another Angular project, and expect them to work (given the dependencies are met). You might already know that wrapping UI components provided by other libraries such as Material, ng-zorro-antd, ngx-bootstrap, etc. is a good practice. It protects you from their API changes and allows you to replace the underlying library if required. Components in shared modules are a good place for such wrapping.
Do not make a giant SharedModule; rather, granularize each atomic feature into its own module (see Fig-3). Criss-cross imports between atomic shared modules are allowed, but try to minimize them as much as possible. To bring the flavor of a tiny library, you could even prefix the directories and modules with your Angular application's custom prefix (by default it is app).
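As an illustration only (the module, selector, and input names are our own), one such atomic shared module wrapping a Material button could look roughly like this:

import { Component, Input, NgModule } from '@angular/core';
import { MatButtonModule } from '@angular/material/button';

// Wrapping the third-party button keeps its API confined to one place,
// so a library swap later only touches this module.
@Component({
  selector: 'app-button',
  template: `<button mat-raised-button [color]="color"><ng-content></ng-content></button>`,
})
export class ButtonComponent {
  @Input() color: 'primary' | 'accent' | 'warn' = 'primary';
}

@NgModule({
  declarations: [ButtonComponent],
  imports: [MatButtonModule],
  exports: [ButtonComponent],
})
export class AppButtonModule {}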
The pages directory is the most interesting part of this structure. Think of it as a sink: feature modules fall into it, but nothing comes out (i.e. no exported members). In these modules, you do not declare any component other than the page itself.
Page components have no business logic. They are merely presenters that orchestrate components from the business feature modules. Take the home page: it will contain a header, a hero section, articles, comments, a contact section, and so on, all coming from their respective feature modules!
@NgModule({
  declarations: [HomePageComponent],
  imports: [
    CommonModule,
    ArticlesModule,
    CommentsModule,
    ContactModule,
    HeadersModule,
    HomePageRoutingModule,
  ],
})
export class HomePageModule {}
How a fictional home page template (home-page.component.html) might look:
<app-header-default></app-header-default>
<main class="container">
  <app-hero-content></app-hero-content>
  <app-article-list></app-article-list>
  <app-comment-list-latest></app-comment-list-latest>
  <app-contact-form></app-contact-form>
</main>
<app-footer-default></app-footer-default>
A page can take help from a page-specific service that combines data and state for that page only. You should provide such a service on the page component and NOT in root. Otherwise, the state may persist even after you navigate away from the page, because the page component gets destroyed but the page service does not.
// home-page.service.ts
@Injectable()
export class HomePageService {}

// home-page.component.ts
@Component({
  ...
  providers: [HomePageService],
})
export class HomePageComponent {
  constructor(private homePageService: HomePageService) {}
}
The most important purpose of page modules is that each module is loaded lazily, keeping the app performant and lightweight.
Pro-tip: If you define a single page component per module, then you can claim a further reduction in the initial bundle size. This practice also organizes all routes in a single source (namely AppRoutingModule) which is easier to manage. Then, your app-routing.module.ts file may look like this:
const appRoutes: Routes = [
  {
    path: '',
    loadChildren: () => import('./pages/home-page/home-page.module').then((m) => m.HomePageModule),
  },
  {
    path: 'home',
    redirectTo: '',
    pathMatch: 'full',
  },
  {
    path: 'products/:id', // <-------- NOTE 1. Child route
    loadChildren: () => import('./pages/product-details-page/product-details-page.module').then((m) => m.ProductDetailsPageModule),
  },
  {
    path: 'products', // <--------- NOTE 2. Parent route
    loadChildren: () => import('./pages/product-list-page/product-list-page.module').then((m) => m.ProductListPageModule),
  },
  {
    path: 'checkout/pay',
    loadChildren: () => import('./pages/checkout-payment-page/checkout-payment-page.module').then((m) => m.CheckoutPaymentPageModule),
  },
  {
    path: 'checkout',
    loadChildren: () => import('./pages/checkout-page/checkout-page.module').then((m) => m.CheckoutPageModule),
  },
  {
    path: '**',
    loadChildren: () => import('./pages/not-found-page/not-found-page.module').then((m) => m.NotFoundPageModule),
  },
];
Notes 1 & 2: Since route declarations are parsed top-to-bottom, be sure to declare child paths before the parent path. This ensures the lazy-loaded chunks are fetched correctly. Otherwise, if you define the parent route first, visiting any child route will also load the parent route's module chunk unnecessarily. You can see the difference in DevTools. Here is our experiment with the parent route first (Fig-5.1) vs. the child route first (Fig-5.2).
Making frontend applications is not as simple as it used to be. Frontend frameworks like React and Vue.js rely heavily on APIs. This adds complexity to our app because we need to manage how we call these APIs. One solution is to simplify the process by writing clean API calls.
But wait, what are “clean API calls”? To me, that means the proper structuring of API calls, making them easy to read and maintain. First, we do this by utilizing the single-responsibility principle. Each function must have a single responsibility, and with this principle in mind, we need to separate the logic for each endpoint.
The other thing we try to consider is the DRY principle ("Don't Repeat Yourself"). This is very important, arguably more so in the case of frontend API providers, because it gives a sense of tidiness to the code, thus improving readability. We use Axios because it gives us features such as interceptors and defaults, which reduce the amount of boilerplate code you need to write for each API endpoint.
There are many ways to achieve this. You can either use the Fetch API or you can use a third-party library called Axios. By the title of this article, you can guess that we prefer Axios. Why? Let’s weigh in on the pros and cons.
What we like most about Axios is that it is very simple to use. The programming API is so easy that we have gotten really used to it. Well, this might be too personal of a preference, but you can try it yourself. We have used jQuery's AJAX and the Fetch API, and we would rank Axios above both of them, although not by too large of a margin, since all three are nice to work with.
Honestly, you wouldn't think about this feature until you needed it. We mean, most people have modern browsers, but if some of your customers aren't "most people", they might not be able to use your app if it isn't backward-compatible. The Fetch API is relatively new, and old browsers aren't capable of using it. Libraries like Axios and jQuery's AJAX, on the other hand, are built on top of JavaScript's XMLHttpRequest. For those of you who are wondering, XMLHttpRequest is the older built-in HTTP calling mechanism in JavaScript.
You can do a lot with Axios – a whole lot. For example, as of the writing of this article, the Fetch API does not have built-in request/response interceptors. You have to use third parties. Compared to Fetch, writing clean APIs using Axios is very simple. Axios already has so many built-in conveniences. To name a few, you can set default headers and default base URLs using Axios.
We have used Axios long enough to understand that this library can be overkill for small apps. If you only need its GET and POST APIs, you are probably better off with the Fetch API anyway. Fetch is native to JavaScript, whereas Axios is not. This brings us to the next point.
This second point corresponds to the first one perfectly. One of the main reasons we avoid the use of Axios for small apps is the fact that it bloats your production build size. Sure, you might not notice a size spike for large apps like in e-commerce and such. But you will notice a huge increase if you are making a simple portfolio site. The lesson to be learned? Use the right tools for the right job.
Look, let me just start by saying that this third point is really subjective and some people might have opposite views. Axios is a third party. Yes, you read that right. Unlike Fetch, it is not native to the browser. You are depending on the community to maintain your precious app. Then again, most apps these days do use open-source products. So would it be a problem? Not really. Again, this is a preference. we are not advising you to reinvent the wheel. Just understand that you don’t own the wheel.
Axios is available from the usual JavaScript package registries; you can install it with Yarn or npm. If you are using regular HTML, you can import it from CDNs like jsDelivr, Unpkg, or Cloudflare.
Assuming you are using NPM, we need to install Axios using this command:
npm install -S axios
If there are no errors in the installation, you can continue to the next step. You can check alternative installation methods on GitHub.
What are Axios clients? Clients are how we set default parameters for each API call. We set our default values in the Axios clients, then we export the client using JavaScript’s export default. Afterward, we can just reference the client from the rest of our app.
First, make a new file preferably named apiClient.js and import Axios:
import axios from 'axios';
Then make a client using axios.create:
const axiosClient = axios.create({
  baseURL: `https://api.example.com`,
  headers: {
    'Accept': 'application/json',
    'Content-Type': 'application/json'
  }
});
As you can see, we set the base URL and default headers for all our HTTP calls.
When interacting with APIs, especially when there is authentication involved, you will need to define what counts as an unauthorized call and make your application react appropriately. Interceptors are perfect for this use case.
Let’s say that we need our application to redirect us to our home page when our cookies expire, and when our cookies expire, the API will return a 401 status code. This is how you would go about it:
axiosClient.interceptors.response.use(
  function (response) {
    return response;
  },
  function (error) {
    let res = error.response;
    if (res.status === 401) {
      window.location.href = "https://example.com/login";
    }
    console.error("Looks like there was a problem. Status Code: " + res.status);
    return Promise.reject(error);
  }
);
Simple, right? After you’ve defined your client and attached an interceptor, you just need to export your client to be used on other pages.
After configuring your Axios client, you need to export it to make it available for the entire project. You can do this by using the export default feature:
export default axiosClient;
Now we have made our Axios client available for the entire project. Next, we will be making API handlers for each endpoint.
Before we continue, we thought it would be useful to show you how to arrange your subfolders. Instead of writing a long, comprehensive explanation, it is easier to describe the layout we are talking about:
This assumes we will have admin, user, and product endpoints. We will keep the apiClient.js file in the root of the network folder. The naming of the folders, and even the structure itself, is just our personal preference.
The endpoints will be put inside a lib folder and separated by concerns in each file. For example, for authentication purposes, user endpoints would be put inside the user file. Product-related endpoints would be put inside the product file.
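Based on that description, the layout would look roughly like this (the file and folder names reflect our own preference, not a requirement):

network/
├── apiClient.js
└── lib/
    ├── admin.js
    ├── user.js
    └── product.js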
Now we will be writing the API handler. Each endpoint will have its asynchronous function with custom parameters. All endpoints will use the client we defined earlier. In the example below, we will write two API handlers to get new products and add new products:
import axiosClient from "../apiClient";

export function getProduct() {
  return axiosClient.get('/product');
}

export function addProduct(data) {
  return axiosClient.post('/product', JSON.stringify(data));
}
This pretty much sums up how we would write an API handler, and as you can see, each API call is clean and it all applies the single-responsibility principle. You can now reference these handlers on your main page.
Assuming that you are using an NPM project for all of this, you can easily reference your JavaScript API handlers using the import method. In this case, we will be using the getProduct endpoint:
import { getProduct } from "../network/lib/product";

getProduct()
  .then(function (response) {
    // Process the response and
    // do something with the UI
  });
There you have it: a clean no-fuss API handler. You’ve successfully made your app much more readable and easier to maintain.
In this post, we will cover five simple React hooks that you will find handy in any project. These hooks are useful no matter the features of the application. For each hook, we will provide the implementation and the client code sample.
Web applications use modals extensively and for various reasons. When working with modals, you quickly realize that managing their state is a tedious and repetitive task. And when you have code that's repetitive and tedious, you should take time to abstract it. That's what useModalState does for managing modal states.
Many libraries provide their version of this hook, and one such library is Chakra UI. If you want to learn more about Chakra UI, check out my blog post here.
The implementation of the hook is very simple, even trivial. But in my experience, it pays off using it rather than rewriting the code for managing the modal’s state each time.
import { useState } from "react";

export const useModalState = ({ initialOpen = false } = {}) => {
  const [isOpen, setIsOpen] = useState(initialOpen);

  const onOpen = () => {
    setIsOpen(true);
  };

  const onClose = () => {
    setIsOpen(false);
  };

  const onToggle = () => {
    setIsOpen(!isOpen);
  };

  return { onOpen, onClose, isOpen, onToggle };
};
And here’s an example of client code using the hook:
import React from "react";
import Modal from "./Modal";
import { useModalState } from "./useModalState"; // assuming the hook lives alongside this component

const Client = () => {
  const { isOpen, onToggle } = useModalState();

  const handleClick = () => {
    onToggle();
  };

  return (
    <div>
      <button onClick={handleClick} />
      <Modal open={isOpen} />
    </div>
  );
};

export default Client;
useConfirmationDialog is another modal-related hook that we use quite often. It's a common practice to ask users for confirmations when performing sensitive actions, like deleting records. So it makes sense to abstract that logic with a hook. Here's a sample implementation of the useConfirmationDialog hook:
import React, { useCallback, useState } from 'react';
import ConfirmationDialog from 'components/global/ConfirmationDialog';

export default function useConfirmationDialog({
  headerText,
  bodyText,
  confirmationButtonText,
  onConfirmClick,
}) {
  const [isOpen, setIsOpen] = useState(false);

  const onOpen = () => {
    setIsOpen(true);
  };

  const Dialog = useCallback(
    () => (
      <ConfirmationDialog
        headerText={headerText}
        bodyText={bodyText}
        isOpen={isOpen}
        onConfirmClick={onConfirmClick}
        onCancelClick={() => setIsOpen(false)}
        confirmationButtonText={confirmationButtonText}
      />
    ),
    [isOpen]
  );

  return {
    Dialog,
    onOpen,
  };
}
And here’s an example of the client code:
import React from "react";
import useConfirmationDialog from './useConfirmationDialog';

function Client() {
  const { Dialog, onOpen } = useConfirmationDialog({
    headerText: "Delete this record?",
    bodyText: "Are you sure you want to delete this record? This cannot be undone.",
    confirmationButtonText: "Delete",
    onConfirmClick: handleDeleteConfirm,
  });

  function handleDeleteConfirm() {
    // TODO: delete
  }

  const handleDeleteClick = () => {
    onOpen();
  };

  return (
    <div>
      <Dialog />
      <button onClick={handleDeleteClick} />
    </div>
  );
}

export default Client;
One thing to note here is that this implementation works fine as long as your confirmation modal doesn’t have any controlled input elements. If you do have controlled inputs, it’s best to create a separate component for your modal. That’s because you don’t want the content of the modal, including those inputs, to re-render each time the user types something.
Properly handling async actions in your application is trickier than it seems at first. There are multiple state variables that you need to keep track of while the task is running. You want to keep the user informed that the action is processing by displaying a spinner. Also, you need to handle the errors and provide useful feedback when they happen. So it pays off to have an established framework for dealing with async tasks in your React project. And that's where you might find useAsync useful. Here's an implementation of the useAsync hook:
import { useCallback, useState } from "react";

export const useAsync = ({ asyncFunction }) => {
  const [loading, setLoading] = useState(false);
  const [error, setError] = useState(null);
  const [result, setResult] = useState(null);

  const execute = useCallback(
    async (...params) => {
      try {
        setLoading(true);
        const response = await asyncFunction(...params);
        setResult(response);
      } catch (e) {
        setError(e);
      }
      setLoading(false);
    },
    [asyncFunction]
  );

  return { error, result, loading, execute };
};
The client code:
import React from "react";
import { useAsync } from "./useAsync"; // assuming the hook above lives next to this file

export default function Client() {
  const { loading, result, error, execute } = useAsync({
    asyncFunction: someAsyncTask,
  });

  async function someAsyncTask() {
    // perform async task
  }

  const handleClick = () => {
    execute();
  };

  return (
    <div>
      {loading && <p>loading</p>}
      {!loading && result && <p>{result}</p>}
      {!loading && error?.message && <p>{error?.message}</p>}
      <button onClick={handleClick} />
    </div>
  );
}
The hook is not hard to write yourself and that’s what we often do. But it might make sense for you to use a more mature library implementation instead. Here’s a great option.
Form validation is another part of React applications that people often find tedious. With that said, there are plenty of great libraries to help with forms management in React. One great alternative is formik. However, each of those libraries has a learning curve. And that learning curve often makes it not worth using in smaller projects. Particularly if you have others working with you and they are not familiar with those libraries.
But it doesn't mean we can't have simple abstractions for some of the code we often use. One such piece of code that we like to abstract is error validation. Checking forms before submitting to the API and displaying validation results to the user is a must-have for any web application. Here's an implementation of a simple useTrackErrors hook that can help with that:
import { useState } from "react";

export const useTrackErrors = () => {
  // The raw state setter is kept separate from the public setErrors helper
  // to avoid the name clash in the original snippet.
  const [errors, setErrorsState] = useState({});

  const setErrors = (errsArray) => {
    const newErrors = { ...errors };
    errsArray.forEach(({ key, value }) => {
      newErrors[key] = value;
    });
    setErrorsState(newErrors);
  };

  const clearErrors = () => {
    setErrorsState({});
  };

  return { errors, setErrors, clearErrors };
};
And here’s the client implementation:
import React, { useState } from "react";
import FormControl from "./FormControl";
import FormLabel from "./FormLabel"; // assumed to live next to FormControl
import Input from "./Input";
import onSignup from "./SignupAPI";
import { useTrackErrors } from "./useTrackErrors";

export default function Client() {
  const { errors, setErrors, clearErrors } = useTrackErrors();
  const [name, setName] = useState("");
  const [email, setEmail] = useState("");

  const handleSignupClick = () => {
    let invalid = false;
    const errs = [];
    if (!name) {
      errs.push({ key: "name", value: true });
      invalid = true;
    }
    if (!email) {
      errs.push({ key: "email", value: true });
      invalid = true;
    }
    if (invalid) {
      setErrors(errs);
      return;
    }
    onSignup(name, email);
    clearErrors();
  };

  const handleNameChange = (e) => {
    setName(e.target.value);
    setErrors([{ key: "name", value: false }]);
  };

  const handleEmailChange = (e) => {
    setEmail(e.target.value);
    setErrors([{ key: "email", value: false }]);
  };

  return (
    <div>
      <FormControl isInvalid={errors["name"]}>
        <FormLabel>Full Name</FormLabel>
        <Input onChange={handleNameChange} value={name} placeholder="Your name..." />
      </FormControl>
      <FormControl isInvalid={errors["email"]}>
        <FormLabel>Email</FormLabel>
        <Input onChange={handleEmailChange} value={email} placeholder="Your email..." />
      </FormControl>
      <button onClick={handleSignupClick}>Sign Up</button>
    </div>
  );
}
Debouncing has broad use in any application. The most common use is throttling expensive operations, for example preventing the application from calling the search API on every keypress and letting the user finish typing before calling it. The useDebounce hook makes throttling such expensive operations easy. Here's a simple implementation written with awesome-debounce-promise under the hood:
import { useMemo } from "react";
import AwesomeDebouncePromise from "awesome-debounce-promise";

const debounceAction = (actionFunc, delay) =>
  AwesomeDebouncePromise(actionFunc, delay);

export default function useDebounce(func, delay) {
  const debouncedFunction = useMemo(() => debounceAction(func, delay), [
    delay,
    func,
  ]);
  return debouncedFunction;
}
And here’s the client code:
import React from "react";
import useDebounce from "./useDebounce"; // the hook defined above

const callAPI = async (value) => {
  // expensive API call
};

export default function Client() {
  const debouncedAPICall = useDebounce(callAPI, 500);

  const handleInputChange = async (e) => {
    debouncedAPICall(e.target.value);
  };

  return (
    <form>
      <input type="text" onChange={handleInputChange} />
    </form>
  );
}
One thing to note with this implementation: you need to ensure that the expensive function is not recreated on each render, because that would reset the debounced version of the function and wipe out its inner state. There are two ways to achieve that: define the expensive function outside the component, or memoize it inside the component so its reference stays stable across renders.
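A minimal sketch of both options, reusing the callAPI and useDebounce names from the examples above:

import React, { useCallback } from "react";
import useDebounce from "./useDebounce"; // the hook shown earlier

// Option 1: define the expensive function outside the component,
// so it is created only once.
const callAPI = async (value) => {
  // expensive API call
};

function SearchBox() {
  const debouncedAPICall = useDebounce(callAPI, 500);
  return <input type="text" onChange={(e) => debouncedAPICall(e.target.value)} />;
}

// Option 2: memoize the function inside the component with useCallback,
// so its reference survives re-renders.
function InlineSearchBox() {
  const memoizedCallAPI = useCallback(async (value) => {
    // expensive API call
  }, []);
  const debouncedAPICall = useDebounce(memoizedCallAPI, 500);
  return <input type="text" onChange={(e) => debouncedAPICall(e.target.value)} />;
}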
And that’s it for this post. There are many useful hook libraries worth checking out, and if you’re interested, here’s a great place to start. While there are many helpful custom hooks out there, these five are the ones that you will find handy in any React project.
In the current age of JavaScript, Promises are the default way to handle asynchronous behavior in JavaScript. But how do they work? Why should you understand them very well?
When we make you a promise, you take our word that we will fulfill that promise.
But we don’t tell you when that promise will be fulfilled, so life goes on…
There are two possible scenarios: fulfillment or rejection.
One day, we fulfill that promise. It makes you so happy that you post about it on Twitter!
One day, we tell you that we can’t fulfill the promise.
You make a sad post on Twitter about how we didn’t do what we had promised.
Both scenarios cause an action. The first is a positive one, and the next is a negative one.
Keep this scenario in mind while going through how JavaScript Promises work.
JavaScript is synchronous. It runs from top to bottom. Every line of code below will wait for the execution of the code above it.
But when you want to get data from an API, you don’t know how fast you will get the data back. Rather, you don’t know if you will get the data or an error yet. Errors happen all the time, and those things can’t be planned. But we can be prepared for it.
So when you’re waiting to get a result from the API, your code is blocking the browser. It will freeze the browser. Neither we nor our users are happy about that at all!
Perfect situation for a Promise!
Now that we know that you should use a Promise when you make Ajax requests, we can dive into using Promises. First, we will show you how to define a function that returns a Promise. Then, we will dive into how you can use a function that returns a Promise.
Below is an example of a function that returns a Promise:
function doSomething(value) {
  return new Promise((resolve, reject) => {
    // Fake an API call
    setTimeout(() => {
      if (value) {
        resolve(value);
      } else {
        reject('The Value Was Not Truthy');
      }
    }, 5000);
  });
}
The function returns a Promise. This Promise can be resolved or rejected.
Like a real-life promise, a Promise can be fulfilled or rejected.
According to MDN Web Docs, a JavaScript Promise can have one of three states:
"- pending: initial state, neither fulfilled nor rejected. - fulfilled: meaning that the operation was completed successfully. - rejected: meaning that the operation failed."
The pending state is the initial state. This means that we have this state as soon as we call the doSomething() function, so we don't know yet whether the Promise will be rejected or resolved.
In the example, if the value is truthy, the Promise will be resolved. In this case, we pass the variable value in it to use it when we would call this function.
We can define our conditions to decide when to resolve our Promise.
In the example, if the value is falsy, the Promise will be rejected. In this case, we pass an error message. It’s just a string here, but when you make an Ajax request, you pass the server’s error.
Now that we know how to define a Promise, we can dive into how to use a function that returns a Promise:
// Classic Promise
doSomething()
  .then((result) => {
    // Do something with the result
  })
  .catch((error) => {
    console.error('Error message: ', error);
  });

// Use a returned `Promise` with async/await
(async () => {
  let data = null;
  try {
    data = await doSomething();
    // Do something with the result
  } catch (error) {
    console.error('Error message: ', error);
  }
})();
You can recognize a function that returns a Promise by the .then() method or an await keyword. The catch will be called if there is an error in your Promise. So making error handling for a Promise is pretty straightforward.
Promises are used in a lot of JavaScript libraries and frameworks as well. One of the simplest Promise-based web APIs is the Fetch API, which you can use for making Ajax requests.
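As a small sketch (the URL is made up), consuming the Fetch API looks just like consuming the doSomething() Promise above:

fetch('https://api.example.com/articles')
  .then((response) => {
    if (!response.ok) {
      throw new Error('Request failed with status ' + response.status);
    }
    return response.json(); // response.json() also returns a Promise
  })
  .then((data) => {
    // Do something with the parsed data
    console.log(data);
  })
  .catch((error) => {
    console.error('Error message: ', error);
  });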
In this blog, we’ll make a comparative analysis of Golang vs. Node.js for backend web development.
Now, we want to understand whether the switch from a traditional Node.js to the popular Golang is sensible or not. That’s why we would like to compare the two solutions to help you make the best choice.
Even though Golang was only launched in 2009, it can still be regarded as quite mature and robust.
However, there can be no comparison when Node.js comes into play. It has a broader audience which supports the platform, even though the API is changing somewhat.
Being an interpreted language based on JavaScript, Node.js turns out to be a bit slower than compiled languages. Node.js is not able to provide the raw performance for CPU- or memory-bound tasks that Go does. This is because Go is based on C and C++, which are inherently good in terms of performance.
However, when it comes to real life, both show almost equal results.
Node.js is single-threaded and uses an event-callback mechanism, and this is what makes it much weaker than Go in this area. Go uses co-routines (called "goroutines") and lightweight threads, and communication among them is elegant and seamless thanks to channels.
Node.js is much weaker in terms of parallel processes for big projects compared to Golang, which was specifically designed to overcome possible issues in this area. Golang has the advantage due to goroutines that enable multiple threads to be performed concurrently, with parallel tasks executed simply and safely.
Front-End and Back-End
You should keep in mind that Golang is perfect for server-side applications, while Node.js is unrivaled when it comes to client-side development. Therefore, Go is an ideal decision if you want to create high-performing concurrent services on the back-end. And Node.js is your choice for the front-end.
For a long time, Golang was regarded as having a very small community because it was young and not widely implemented. Now, the situation has changed. Despite the fact that Go still fails to keep pace with Node.js support, the language boasts numerous packages (more than 100), and the number keeps growing. With JavaScript, you'll have no difficulty in finding the right tool or package for your project; today, there are more than 100,000. Hundreds of libraries, various tutorials, and multiple platforms are at your disposal.
According to the 2017 Developer Survey by StackOverflow, JavaScript continues to occupy the leading position, being chosen by 61.2% of developers. Go showed a much smaller result: 4.3%. However, this still means Go is among the most promising languages of 2018, based even on a simple Google search.
Currently, it’s still much easier to find a competent team of Node.js developers than put together one of Golang specialists. However, you can always take the IT outsourcing route and reach out to a reputable team with a strong portfolio of Go work.
When you deal with errors while using Go, you have to implement explicit error checking. This can make the process of finding the causes of errors difficult. Yet numerous developers argue that such an approach provides a cleaner application in general.
The Node.js approach with a throw/catch mechanism is more traditional and is preferred by many developers, although there are some problems with consistency at the end.
JavaScript is one of the most common coding languages nowadays. If you’re familiar with it, it will be no big deal to adapt to using Node.js programming. If you’re a newbie in JavaScript, you can leverage JavaScript’s vast community, which is always ready to share its expertise or give advice.
With Golang, you have to be ready to learn a new language, including co-routines, strict typing, pointers, and other programming concepts that may confuse you at first.
The latest trend of 2017 is blockchain technology. Many projects nowadays trumpet their blockchain-based application at every opportunity. And for good reason! The technology provides reliability, full control for the user, high-quality data, longevity, process integrity, transparency, and one more pack of buzzwords that define the viability of many startups today.
Theoretically, it’s possible to implement Node.js for developing a blockchain. However, building a blockchain in Go is a much easier solution and we highly recommend it.
In its essence, a blockchain is a distributed database of records. In Go, the implementation boils down to an array and a map: the array keeps ordered hashes, and the map keeps hash -> block pairs (maps are unordered). Then we add blocks, and that's it!
So, what should you choose: Node.js or Golang? The answer to this question depends on which type of development you need at the moment and how much you are going to scale the project.
For sure, Node.js has a broader community and a comprehensive documentation, yet, Go has a syntactically cleaner concurrency model, and it is better suited for scaling up.
Node.js, in its turn, can offer you a variety of packages, most of which are hard to re-implement in Go. In these cases, it would be wiser to use Node.js.
If you feel overwhelmed by all this information or simply need some extra hands with Golang or Node.js expertise, then write a comment to initialise a conversation with other developers here.
TypeScript 4.2 was released recently. What awesome features does this release bring? What impact does it have on your daily life as a developer? Should you update immediately?
Here, we will be going through all the most exciting new features. Here is a summary:
- Smarter type alias preservation
- Leading and middle rest elements in tuple types
- Stricter checks for the in operator
- The --noPropertyAccessFromIndexSignature flag
- Smarter template literal expression types
- Stricter uncalled function checks
- The --explainFiles compiler flag
To get an editor with the latest TypeScript version, use Visual Studio Code Insiders; alternatively, you can use a plugin for VS Code.
If you just want to have a play while reading the article you can use the Typescript Playground here. It is a fun and super easy tool to use.
Sometimes TypeScript just doesn’t resolve types properly. It may return the correct types but just not return the correct alias. The alias could be important and shouldn’t be lost along the way.
Let’s check this function:
export type BasicPrimitive = number | bigint;

export function divisablePer0(value: BasicPrimitive) {
  if (value === 0) {
    return undefined;
  }
  return value;
}

type ReturnAlias = ReturnType<typeof divisablePer0>;
// number | bigint | undefined
Notice that an undefined type needs to be added to the method's return type, as it returns undefined in some scenarios.
Before 4.2, the return type of divisablePer0 is number | bigint | undefined. That type is indeed correct, but we have lost some information: the alias BasicPrimitive got lost in the process, and it is a handy piece of information to have.
If we do the same on TypeScript 4.2 we get the correct alias:
export type BasicPrimitive = number | bigint;

export function divisablePer0(value: BasicPrimitive) {
  if (value === 0) {
    return undefined;
  }
  return value;
}

type ReturnAlias = ReturnType<typeof divisablePer0>;
// BasicPrimitive | undefined
Now the method divisablePer0 has the proper return type: BasicPrimitive | undefined. That makes your code more readable just by upgrading.
In the article about mapped types here we already looked at TypeScript Tuples. As a refresher, let’s revisit the example:
let arrayOptions: [string, boolean, boolean];

arrayOptions = ['config', true, true]; // works

arrayOptions = [true, 'config', true];
//              ^^^^  ^^^^^^^^
// Does not work: incompatible types

function printConfig(data: string) {
  console.log(data);
}

printConfig(arrayOptions[0]);
However, we forgot to check whether Tuples can use optional elements. Let’s see what the previous example would look like:
let arrayOptions: [string, boolean?, boolean?];

arrayOptions = ['config', true, true]; // works
arrayOptions = ['config', true]; // works too
arrayOptions = ['config']; // works too

function printConfig(data: string) {
  console.log(data);
}

printConfig(arrayOptions[0]);
Prior to 4.2 we could even use the spread operator to indicate a dynamic number of elements:
let arrayOptions: [string, ...boolean[]];

arrayOptions = ['config', true, true]; // works
arrayOptions = ['config', true]; // works too
arrayOptions = ['config']; // works too

function printConfig(data: string) {
  console.log(data);
}

printConfig(arrayOptions[0]);
In this new TypeScript version, tuples become more powerful. Previously, we could use the spread operator, but we couldn't define the types of the last elements.
let arrayOptions: [string, ...boolean[], number];

arrayOptions = ['config', true, true, 12]; // works
arrayOptions = ['config', true, 12]; // works too
arrayOptions = ['config', 12]; // works too

function printConfig(data: string) {
  console.log(data);
}

printConfig(arrayOptions[0]);
Note that something like this is invalid:
let arrayOptions: [string, ...boolean[], number?];
An optional element can't follow a rest element. However, note that ...boolean[] does accept an empty array, so that tuple would accept [string, number] types.
Let’s see that in detail in the following example:
let arrayOptions: [string, ...boolean[], number];

arrayOptions = ['config', 12]; // works
The in operator is handy to know if a method or a property is in an object. However, in JavaScript, it will fail at runtime if it’s checked against a primitive.
Now, when you try to do this:
"method" in 23 // ^^ // Error: The right-hand side of an 'in' expression must not be a primitive.
You'll get an error telling you explicitly what's going on. As this operator has been made stricter, this release might introduce breaking changes.
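If your code might receive primitives, one way to stay on the safe side of this stricter check is to narrow the value to an object first. A small sketch:

function hasMethod(value: unknown): boolean {
  // Narrow to a non-null object before using `in`, which now rejects primitives.
  return typeof value === 'object' && value !== null && 'method' in value;
}

console.log(hasMethod({ method: () => 42 })); // true
console.log(hasMethod(23)); // false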
--noPropertyAccessFromIndexSignature
Yet another compiler configuration that’s always interesting. In TypeScript, you can access properties using the bracketed element syntax or the dot syntax like JavaScript. That accessor is possible when the key is a string.
interface Person {
  name: string;
}

const p: Person = { name: 'Max' };

console.log(p.name);    // Max
console.log(p['name']); // Max
There's a situation that has led to mistyping explicit properties:
interface Person {
  name: string;
  [key: string]: string;
}

const p: Person = { name: 'Max' };

console.log(p.namme);    // undefined
console.log(p['namme']); // undefined
Note how we are accessing the wrong property namme but because it fits the [key: string] implicit one, TypeScript won’t fail.
Enabling --noPropertyAccessFromIndexSignature will make TypeScript look for the explicit property when using the dotted syntax.
interface Person {
  name: string;
  [key: string]: string;
}

const p: Person = { name: 'Max' };

console.log(p.namme);
//            ^^^^^
// Error

console.log(p['namme']); // works fine
It’s not part of the strict configuration as this might not suit all developers and codebases.
Template literal types were introduced in 4.1, and here they got smarter. Previously, you couldn't rely on template string expressions being typed as template literal types.
type PropertyType = `get${string}`;

function getProperty(property: PropertyType, target: any) {
  return target[property];
}

getProperty('getName', {}); // works

const propertyName = 'Name';
const x = `get${propertyName}`;

getProperty(x, {});
//          ^^^
// Error: Argument of type 'string' is not assignable to parameter of type '`get${string}`'
The core problem is that template string expressions resolve to type string, which leads to this type of incompatibility:
const x = `get${propertyName}`; // string
However, with 4.2, template string expressions will always start out with the template literal type:
const x = `get${propertyName}`; // getName
TypeScript's uncalled function checks now apply within && and || expressions. Under --strictNullChecks you will see the following error:
function isInvited(name: string) {
  return name !== 'Robert';
}

function greet(name: string) {
  if (isInvited) {
    // ^^^^^^^^^
    // Error:
    // This condition will always return true since the function is always defined.
    // Did you mean to call it instead?
    return `Welcome ${name}`;
  }
  return `Sorry you are not invited`;
}
Sometimes it can be quite challenging to work out where the Typescript file definitions are pulled from. It’s sometimes a trial and error process.
It’s now possible to get a deeper insight into what’s going on, making the compiler more verbose, using the following:
tsc --explainFiles
The output lists every file included in the compilation together with the reason it is part of the build.
It is an awesome feature that will help you understand TypeScript's internals further.
Performance optimization of frontend applications plays an important role in the application architecture. A higher-performing application will ensure an increase in user retention, improved user experience, and higher conversion rates.
According to Google, 53% of mobile phone users leave the site if it takes more than 3 seconds to load. At the same time, more than half of the pages tested are heavy in terms of bandwidth it utilizes to download the required assets. Don’t forget your frontend application performance directly affects its search ranking and conversion rates.
We use the Vue JS framework for our frontend applications. The challenge we had with our frontend application was with the landing page, which was taking around 3.8 secs to load with 4.2 MB of resources to be downloaded. As the response time was quite high, it was challenging to retain the users.
This article shares some of the implementation changes we made to improve the performance of our frontend application.
Image compression is really important when optimizing frontend applications. Lighter images download faster and take less time to load than larger images. By compressing the images, we can make our site much lighter, which results in faster page load times.
WebP is a modern image format that provides superior lossless and lossy compression for images on the web. Using WebP, webmasters and web developers can create smaller, richer images that make the web faster.
WebP lossless images are 26% smaller in size compared to PNGs. WebP lossy images are 25–34% smaller than comparable JPEG images at equivalent SSIM quality index.
WebP is supported by Chrome, Firefox, Edge, and Safari from version 14 and above. Please feel free to read more about WebP.
It is evident that download time is reduced after applying WebP compression.
Synchronous component loading is the process of loading components with a static import statement, which is the basic way of loading a component.
Components loaded with a static import statement are added to the main application bundle. If code splitting is not used, the application core becomes huge, which affects the overall performance of the application.
The below code snippet is an example of static component loading of store and locale components.
import store from '@common/src/store' import locale from '@common/src/util/locale'
Asynchronous components loading is the process where we load chunks of our application in a lazy manner. It ensures that components are only loaded when they are needed.
Lazy loading ensures that the bundle is split and serves only the needed parts so users are not waiting to download and parse the code that will not be used.
In the below code snippet, the image of YouTube is loaded asynchronously when it’s needed.
<template>
  <lazy-image
    :lazy-src="require('@/assets/images/icon/youtube.png')"
    alt="YouTube"
    draggable="false"
  />
</template>

<template>
  <img
    v-if="lazySrc"
    ref="lazy"
    :src="defaultImage"
    :data-src="lazySrc"
    :alt="alt"
    class="lazyImage"
    @error="handleError"
  >
  <img v-else :src="defaultImage">
</template>
To dynamically load a component, we declare a const and append an arrow function followed by the default static import statement.
We can also add a webpack magic comment. The comment tells webpack to assign our chunk the name we provide; otherwise, webpack will auto-generate a name by itself.
const MainBanner = () => import(/* webpackChunkName: "c-main-banner" */ '@/components/MainBanner')
If we go to our developer tools and open the Network tab we can see that our chunk has been assigned the name we provided in the webpack’s chunk name comment.
According to MDN Web Docs, code splitting is the process of splitting the application code into various bundles or components which can then be loaded on demand or in parallel.
As an application is used extensively, it accumulates many changes and new requirements; over time it grows in complexity, its CSS and JavaScript files or bundles grow in size, and don't forget the third-party libraries we use.
We don't have much control over third-party library downloads, as they are required for our application to work, but at the very least we should make sure our own code is split into multiple smaller files. The features required at page load can be downloaded quickly with smaller files, with additional scripts lazy-loaded after the page or application becomes interactive, which improves performance.
We have heard some frontend developers argue that this only increases the number of files while the code remains the same. We completely agree with them, but the main point here is that the amount of code needed during the initial load can be reduced.
Code splitting is a feature supported by bundlers like webpack and Browserify, which can create multiple bundles that are dynamically loaded at runtime. Alternatively, we can do it the old-school way, where the code required for individual Vue files is separated and loaded on demand.
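For example, with Vue Router each route component can be wrapped in a dynamic import so it becomes its own chunk (the route paths and file names below are illustrative):

const routes = [
  {
    path: '/',
    component: () => import(/* webpackChunkName: "home" */ '@/views/Home.vue'),
  },
  {
    path: '/checkout',
    // This chunk is only fetched when the user navigates to /checkout.
    component: () => import(/* webpackChunkName: "checkout" */ '@/views/Checkout.vue'),
  },
];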
Basically, third-party requests can slow down page loads for several reasons like slow networks, long DNS lookups, multiple redirects, slow servers, poor performing CDN, etc.
As third-party resources (e.g., Facebook or Twitter, or MoEngage) do not originate from your domain, their behavior is sometimes difficult to predict and they may negatively affect page experience for your users.
Using preconnect helps the browser prioritize important third-party connections and speeds up your page load as third-party requests may take a long time to process. Establishing early connections to these third-party origins by using a resource hint like preconnect can help reduce the time delay usually associated with these requests.
preconnect is useful when you know the origin of the third-party request but don’t know what the actual resource itself is. It informs your browser that the page intends to connect to another origin and that you would like this process to start as soon as possible. The browser closes any connection that isn’t used within 15 seconds, so preconnect should only be used for the most critical third-party domains.
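The hint usually lives as a <link rel="preconnect"> tag in index.html; as a sketch, the same hint can also be added programmatically (the origin below is hypothetical):

// Equivalent to <link rel="preconnect" href="https://analytics.example.com" crossorigin>
const hint = document.createElement('link');
hint.rel = 'preconnect';
hint.href = 'https://analytics.example.com';
hint.crossOrigin = 'anonymous';
document.head.appendChild(hint);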
As a part of best practices, we need to make sure we don't have any commented-out code in JS or CSS files. It's commented out because we don't want to use it, so it's better to get rid of it, as commented code only adds to the size of the file.
As part of frontend application development, we might use a CSS framework, but we will typically use only a small set of the framework's styles, and a lot of unused CSS will be included.
According to PurgeCSS, it’s a tool to remove unused CSS. It can be part of your development workflow. PurgeCSS analyzes your content and your CSS files then it matches the selectors used in your files with the ones in your content files. It removes unused selectors from your CSS, resulting in smaller CSS files.
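As a rough sketch (the exact options depend on your build setup and PurgeCSS version), wiring PurgeCSS in through PostCSS can look like this:

// postcss.config.js
module.exports = {
  plugins: [
    require('@fullhuman/postcss-purgecss')({
      // Scan templates and scripts so the selectors used there are kept.
      content: ['./public/**/*.html', './src/**/*.vue', './src/**/*.js'],
    }),
  ],
};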
Also, while importing anything from third-party libraries, we can use tree shaking to avoid including unused CSS and JS code from those bundles. To analyze this kind of unwanted JS and CSS from third-party libraries, there is a tool called webpack-bundle-analyzer.
Angular is one of the most popular frameworks, with well-designed practices and tools for app development companies. Angular encourages developers to use components to split the user interface into reusable, distinct pieces. There are many popular Angular component libraries available in the market that can help Angular development companies create robust applications for their clients.
In this blog, we will go through some of the most popular Angular component libraries that one can use in 2021.
Angular components are created using Angular and TypeScript and are implemented with Google's Material Design. They enable Angular developers to split the UI into various pieces. Some of the aspects that make developers use Angular component libraries are:
Components in Angular are created in a similar manner to modules; it is up to the developers which to use and when to use it.
Angular component libraries are very responsive in nature, which is crucial for website design and development.
Angular component libraries are user-friendly and built in a lightweight manner, and they are effortless for any Angular developer to learn and use.
NGX Bootstrap is one of the most popular open-source Angular components. It gives vastness in bootstrap capabilities and helps developers utilize it on the next Angular app development project for their clients.
NGX Bootstrap has scored 5.2k stars by the GitHub community.
Features of NGX Bootstrap
The developers on the ngx-bootstrap team put effort into making ngx-bootstrap modular, which can help development companies implement their own styles, templates, and whatnot. All the components are designed with adaptivity and extensibility in mind, and they work on desktop and mobile platforms with the same level of performance.
NGX Bootstrap offers well-written documentation that can significantly ease developers' work and improve software quality. The team at ngx-bootstrap provides complete, easy-to-understand documentation.
NGX Bootstrap has incorporated a set of guidelines that can help in enhancing the code readability and maintainability.
Components of NGX Bootstrap
NG Bootstrap is a popular Angular Bootstrap component library. It has around 7.6k stars on GitHub. When working with NG Bootstrap, there is no need to use third-party JS dependencies, and it also has high test coverage.
Features of NG Bootstrap
NG Bootstrap offers widgets like modal, tabset, rating, and tooltip.
NG Bootstrap offers unique widgets and gives complete access to them. The NG Bootstrap team uses HTML elements and attributes that can help AngularJS app development companies create robust applications. This library also provides focus management and keyboard navigation.
The team at NG Bootstrap tests the code with 100% coverage and reviews all changes.
There is a bootstrap/angular-ui team created for developing the widgets, and it also includes many core Angular contributors.
Teradata Covalent is a UI platform built on Angular and Angular Material. It comes with solutions that combine a comprehensive web framework with a proven design language, and it gives AngularJS developers a quick start in creating a modern web application. Teradata Covalent has 2.2k stars on GitHub.
The Angular command-line interface enables developers to work with Angular Material and to create, deploy, and test the application. Covalent offers a simplified stepper, file upload, user interface layouts, custom web components, expansion panels, and more testing tools for both end-to-end tests and unit tests.
Features of Teradata
Components of Teradata
Nebular is an Angular 8 UI library that focuses on brand adaptability and design. It has four visual themes with support for custom CSS properties. This library is based on the Eva Design System. Nebular includes a few security modules and around 40+ UI components, some of which are listed below. Besides this, it also has 6.7k stars in the GitHub community.
Features of Nebular
Components of Nebular
Clarity is an open-source Angular component that acts as a bridge between the HTML framework and Angular components. Clarity is the best platform for both software developers and designers.
The Clarity library offers ready-implemented data-bound components and a well-structured option for Angular development service providers. It also has 6.1k GitHub stars.
Features of Clarity
The Clarity team offers an understandable and easy-to-use platform that helps developers solve a vast array of challenges.
It is the most reliable platform as it provides a high bar of quality.
Clarity is designed in a way that makes communication and collaboration of expertise very easy and rapid.
With new technologies and techniques coming into the picture, Clarity keeps on evolving.
Components of Clarity
Onsen UI is a component library that is one of the most used by Angular development service companies for creating mobile web apps for Android and iOS using JavaScript. It has 8.2k stars in the GitHub community.
Onsen UI is a library that comes with development tools and powerful CLI with Monaca. The main benefits of Onsen UI are its UI components that can easily be plugged into the mobile application.
Features of Onsen UI
Monaca is a cross-platform tool for creating hybrid apps, and Onsen UI performs very well with it.
It provides ready-to-use components like toolbar, forms, side menu, and much more to give a native look. Besides this, Onsen UI also supports Android and iOS material design, making the appearance and style of the application look according to the selected platform.
The new version of Onsen UI provides optimized performance without slowing down the process.
Despite being a powerful tool to develop a mobile application, it is straightforward to learn and use.
Onsen UI allows the developer to work with technologies like CSS, HTML, and JavaScript. These are the technologies that they might already know, so it would take zero-time to get started with the tool.
Components of Onsen UI
Angular Version 11 release has updates across the platform including the framework, the CLI and components. Let’s dive in!
To make your apps even faster by speeding up their first contentful paint, we’re introducing automatic font inlining. During compile time Angular CLI will download and inline fonts that are being used and linked in the application. We enable this by default in apps built with version 11. All you need to do to take advantage of this optimization is update your app!
In Angular v9 we introduced Component Test Harnesses. They provide a robust and legible API surface to help with testing Angular Material components. It gives developers a way to interact with Angular Material components using the supported API during testing.
Releasing with version 11, we have harnesses for all of the components! Now developers can create more robust test suites.
We’ve also included performance improvements and new APIs. The parallel function makes working with asynchronous actions in your tests easier by allowing developers to run multiple asynchronous interactions with components in parallel. The manualChangeDetection function gives developers access to finer grained control of change detection by disabling automatic change detection in unit tests.
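A sketch of how parallel might be used in a test (the component fixture and button labels are hypothetical):

import { TestbedHarnessEnvironment } from '@angular/cdk/testing/testbed';
import { parallel } from '@angular/cdk/testing';
import { MatButtonHarness } from '@angular/material/button/testing';

it('reads both buttons concurrently', async () => {
  // `fixture` is assumed to come from TestBed.createComponent in the test setup.
  const loader = TestbedHarnessEnvironment.loader(fixture);
  const [saveButton, cancelButton] = await loader.getAllHarnesses(MatButtonHarness);

  // parallel() runs the asynchronous harness interactions at the same time.
  const [saveText, cancelText] = await parallel(() => [
    saveButton.getText(),
    cancelButton.getText(),
  ]);

  expect(saveText).toBe('Save');
  expect(cancelText).toBe('Cancel');
});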
For more details and examples of these APIs and other new features, be sure to check out the documentation for Angular Material Test Harnesses!
We’ve made changes to the builder phase reporting to make it even more helpful during development. We are bringing in new CLI output updates to make logs and reports easier to read.
(Screenshot: Angular CLI output neatly formatted into columns.)
The Angular Language Service provides helpful tools to make development with Angular productive and fun. The current version of the language service is based on View Engine and today we’re giving a sneak peek of the Ivy-based language service. The updated language service provides a more powerful and accurate experience for developers.
Now, the language service will be able to correctly infer generic types in templates the same way the TypeScript compiler does. For example, in the screenshot below we’re able to infer that the iterable is of type string.
(Screenshot: IntelliSense-style insights in Angular templates.)
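As a simple illustration (a hypothetical component, not taken from the release announcement), the Ivy-based language service can infer the type of a loop variable from the array it iterates over:

import { Component } from '@angular/core';

@Component({
  selector: 'app-speaker-names',
  template: `
    <ul>
      <!-- an Ivy-aware editor infers "name" as string here, because
           it comes from the string[] declared in the class below -->
      <li *ngFor="let name of names">{{ name.toUpperCase() }}</li>
    </ul>
  `
})
export class SpeakerNamesComponent {
  names: string[] = ['Ada', 'Grace', 'Linus'];
}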
This powerful new update is still in development but we wanted to share an update as we keep preparing it for a full release in an upcoming version.
Angular has offered support for HMR, but enabling it required configuration and code changes, making it less than ideal to quickly include in Angular projects. In version 11, we’ve updated the CLI to allow enabling HMR when starting an application with ng serve. To get started, run the following command:
ng serve --hmr
After the local server starts the console will display a message confirming that HMR is active:
NOTICE: Hot Module Replacement (HMR) is enabled for the dev server.
Now, during development, the latest changes to components, templates, and styles are instantly applied to the running application, all without requiring a full page refresh. Data typed into forms is preserved, as is scroll position, providing a boost to developer productivity.
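If you would rather not pass the flag on every run, the dev-server builder also accepts an hmr option in angular.json; a sketch of what that might look like (the project and target names below are placeholders, so adjust them to your workspace):

"serve": {
  "builder": "@angular-devkit/build-angular:dev-server",
  "options": {
    "browserTarget": "my-app:build",
    "hmr": true
  }
}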
We’re bringing a faster development and build cycle by making updates to some key areas.
Now, teams can opt in to webpack v5. Currently, you can experiment with module federation; in the future, webpack v5 will also clear the path for further improvements to build speed and bundle size.
Support is experimental and under development, so we don’t recommend opting in for production use.
Want to try out webpack 5? To enable it in your project, add the following section to your package.json file:
"resolutions": { "webpack": "5.4.0" }
Currently, you’ll need to use yarn to test this as npm does not yet support the resolutions property.
In previous versions of Angular, we shipped a default implementation for linting (TSLint). TSLint has now been deprecated by its creators, who recommend migrating to ESLint. James Henry, together with other folks from the open-source community, developed a third-party solution and migration path via typescript-eslint, angular-eslint and tslint-to-eslint-config! We’ve been collaborating closely to ensure a smooth transition of Angular developers to the supported linting stack.
We’re deprecating the use of TSLint and Codelyzer in version 11. This means that in future versions the default implementation for linting Angular projects will not be available.
Head over to the official project page for a guide to incorporate angular-eslint in a project and migrate from TSLint.
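At the time of writing, the angular-eslint project documents schematics along these lines for adding ESLint to a workspace and converting an existing TSLint setup (check the project README for the exact, current commands; the project name below is a placeholder):

ng add @angular-eslint/schematics
ng g @angular-eslint/schematics:convert-tslint-to-eslint my-project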
In this update we’re removing support for IE9/IE10 and IE mobile. IE11 is the only version of IE still supported by Angular. We’ve also removed deprecated APIs and added a few to the deprecation list. Be sure to check this out to make sure you are using the latest APIs and following our recommended best practices.
We’ve also updated the roadmap to keep you posted on our current priorities. Some of the announcements in this post are updates on in-progress projects from the roadmap. This reflects our approach to incrementally rollout larger efforts and allows developers to provide early feedback that we can incorporate it into the final release.
We collaborated with Lukas Ruebbelke from the Angular community on updating the content of some of the projects to better reflect the value they provide to developers.
A function-like HTML segment is a block of HTML that can accept context variables (in other words, parameters). A typical Angular component has two major parts: an HTML template and a TypeScript class. The ability to use this kind of function-like HTML segment is essential for a good shared component, because a shared component with only a fixed HTML template can hardly fit the needs of every use case. Trying to satisfy all potential use cases with a single, fixed HTML template usually ends up with a large template full of conditional statements (like *ngIf), which is painful to read and maintain.
Here we would like to explain, with an example, how we can use TemplateRef to define function-like HTML segments for communication between templates, which is a good way to deal with the large-template problem.
Assume that there is a shared component DataListComponent, which takes an array of data and displays them in the view:
import { Component, Input } from '@angular/core';

export interface DataTableRow {
  dataType: string;
  value: any;
}

@Component({
  selector: 'data-list',
  template: `
    <div *ngFor="let row of data" [ngSwitch]="row.dataType">
      <div *ngSwitchCase="'string'">{{row.value}}</div>
      <div *ngSwitchCase="'number'"># {{row.value | number}}</div>
      <div *ngSwitchCase="'date'">{{row.value | date}}</div>
    </div>
  `
})
export class DataListComponent {
  @Input() data: DataTableRow[] = [];
}
It understands only three types of data for now: string, number, and date. When we want to add more types, the easiest way is to simply add more switch cases. That is totally fine when the new types are generic enough to have universal representations. Yet, for data whose presentation depends on the consuming component, adding more switch cases can make the code very dirty.
Say we want to add a new type boolean that displays true/false in FirstComponent and yes/no in SecondComponent. If we simply go for the more-switch-cases approach, we may end up with something like this:
<div *ngSwitchCase="'boolean-firstComponent'">
  {{ row.value ? 'true' : 'false' }}
</div>
<div *ngSwitchCase="'boolean-secondComponent'">
  {{ row.value ? 'yes' : 'no' }}
</div>
This approach is bad because the shared component now contains component-specific logic. Besides, this block of code will grow quickly as new use cases appear, which will soon become a maintenance disaster. Ideally, we want to pass HTML segments in from the parents, so that we can keep that component-specific logic out of the shared component.
@Component({
  template: `
    <data-list [data]="data">
      <!-- component specific logic to display true/false -->
    </data-list>
  `,
  ...
})
export class FirstComponent {...}

@Component({
  template: `
    <data-list [data]="data">
      <!-- component specific logic to display yes/no -->
    </data-list>
  `,
  ...
})
export class SecondComponent {...}
The logic behind this is actually very straightforward. First, we define templates with context in the consuming components:
@Component({
  template: `
    <data-list [data]="data">
      <ng-template let-value="value">
        {{value ? 'true' : 'false'}}
      </ng-template>
    </data-list>
  `,
  ...
})
export class FirstComponent {...}

@Component({
  template: `
    <data-list [data]="data">
      <ng-template let-value="value">
        {{value ? 'yes' : 'no'}}
      </ng-template>
    </data-list>
  `,
  ...
})
export class SecondComponent {...}
Next, we add the logic to read and present the template segment inside the shared component:
import { Component, ContentChild, Input, TemplateRef } from '@angular/core';

@Component({
  selector: 'data-list',
  template: `
    <div *ngFor="let row of data" [ngSwitch]="row.dataType">
      <div *ngSwitchCase="'string'">{{row.value}}</div>
      <div *ngSwitchCase="'number'"># {{row.value | number}}</div>
      <div *ngSwitchCase="'date'">{{row.value | date}}</div>
      <div *ngSwitchCase="'boolean'">
        <ng-container *ngTemplateOutlet="rowTemplate; context: { value: row.value }"></ng-container>
      </div>
    </div>
  `
})
export class DataListComponent {
  @Input() data: DataTableRow[] = [];
  @ContentChild(TemplateRef) rowTemplate: TemplateRef<any>;
}
Now we have a shared component that is capable of rendering an HTML segment supplied from the outside. Yet, it is still not ideal: what if we have more than one template?
This one is trickier. Although a TemplateRef can receive a context, it doesn’t have a name or ID that we can rely on to distinguish multiple templates from each other programmatically. As a result, when we have more than one template, we need to add a wrapper component on top, so that we can attach identifiers.
@Component({
  selector: 'custom-row-definition',
  template: ''
})
export class CustomRowDefinitionComponent {
  @Input() dataType: string;
  @ContentChild(TemplateRef) rowTemplate: TemplateRef<any>;
}
Instead of directly retrieving the TemplateRef in the shared component, we retrieve the wrapper:
import { Component, ContentChildren, Input, QueryList } from '@angular/core';

@Component({
  selector: 'data-list',
  template: `
    <div *ngFor="let row of data" [ngSwitch]="row.dataType">
      <div *ngSwitchCase="'string'">String: {{row.value}}</div>
      <div *ngSwitchCase="'number'"># {{row.value | number}}</div>
      <div *ngSwitchCase="'date'">{{row.value | date}}</div>
      <ng-container *ngFor="let def of customRowDefinitions">
        <ng-container *ngSwitchCase="def.dataType">
          <ng-container *ngTemplateOutlet="def.rowTemplate; context: { value: row.value }"></ng-container>
        </ng-container>
      </ng-container>
    </div>
  `
})
export class DataListComponent {
  @Input() data: DataTableRow[] = [];
  @ContentChildren(CustomRowDefinitionComponent) customRowDefinitions: QueryList<CustomRowDefinitionComponent>;
}
(Having multiple ng-container elements together with structural directives may potentially cause performance issues, but that is not the main point of this article, so we leave it as is for simplicity.)
In this example, we use the dataType property on the wrapper as the identifier for each template. As a result, we can now define multiple templates with different dataType values.
@Component({
  selector: 'app-root',
  template: `
    <data-list [data]="data">
      <custom-row-definition dataType="array">
        <ng-template let-value="value">
          {{value.join(' - ')}}
        </ng-template>
      </custom-row-definition>
      <custom-row-definition dataType="money">
        <ng-template let-value="value">
          $ {{value | number}}
        </ng-template>
      </custom-row-definition>
    </data-list>
  `
})
export class AppComponent {
  data: DataTableRow[] = [
    { dataType: 'string', value: 'Row 1' },
    { dataType: 'number', value: 500 },
    { dataType: 'date', value: new Date() },
    { dataType: 'array', value: [1, 2, 3, 4] },
    { dataType: 'money', value: 200 }
  ];
}
Some may ask: why don’t we just use ng-content with a selector to project the content from the outside? The major difference is the ability to pass a context (parameters). ng-content is like a function without parameters; it cannot achieve real two-way communication between templates. It is a one-way channel for merging HTML segments from the outside, with no real interaction with the template inside, so it cannot handle use cases like the example above.
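For comparison, here is a minimal sketch of a shared component that uses a named ng-content slot (DataCardComponent and the card-footer selector are made up for this illustration). The projected markup is evaluated entirely in the parent’s scope, and the shared component has no way to hand a value such as row.value back into it:

import { Component } from '@angular/core';

@Component({
  selector: 'data-card',
  template: `
    <div class="card">
      <!-- the projected footer is fixed markup from the parent;
           data-card cannot pass any of its own values into it -->
      <ng-content select="[card-footer]"></ng-content>
    </div>
  `
})
export class DataCardComponent {}

In the parent you would write <data-card><div card-footer>{{ someParentValue }}</div></data-card>, where the footer can only reference the parent’s own members; a let- binding like the ng-template approach above is simply not available.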