You know those tedious tasks you have to do at work: updating configuration files, copying and pasting files, updating Jira tickets.
The definition of a reskin at the company was using the same game mechanics, screens, and positioning of elements, but changing the visual aesthetics such as colors and assets. So in the context of a simple game like ‘Rock Paper Scissors’, we would create a template with basic assets like the one below.
But when we created a reskin of it, we would use different assets and the game would still work. If you look at games like Candy Crush or Angry Birds, you’ll find that they have many varieties of the same game, usually Halloween, Christmas, or Easter releases. From a business perspective it makes perfect sense. Now… back to our implementation. Each of our games would share the same bundled JavaScript file and load in a JSON file that had different content and asset paths.
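Hypothetically, such a config might have looked something like this; the field names and URLs below are invented for illustration, not taken from the studio's actual files.

config.json (a hypothetical example):

{
  "template": "pick-from-three",
  "content": {
    "title": "Rock Paper Scissors: Christmas Edition",
    "choices": ["rock", "paper", "scissors"]
  },
  "assets": {
    "background": "https://cdn.fake-studio.com/xmas/background.png",
    "buttons": "https://cdn.fake-studio.com/xmas/buttons.png"
  }
}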
We had stacked daily schedules, and our first thought was, ‘a lot of this could be automated.’ Whenever we created a new game, we had to carry out a series of repetitive setup steps.
Now, to us this felt more administrative than actual development work. We had been exposed to Bash scripting in a previous role, so we jumped on it and created a few scripts to reduce the effort involved. One script updated the templates and created a new branch; the other committed the changes and merged the project into the Staging and Production environments.
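As a rough sketch (the names and paths here are ours, not the original scripts'), the branch-creation script might have looked like this:

#!/usr/bin/env bash
# new-game.sh -- hypothetical sketch of the template-update script
TICKET_ID=$1

git checkout master && git pull
git checkout -b "$TICKET_ID"                          # new branch named after the Jira ticket
cp -r templates/pick-from-three "games/$TICKET_ID"    # copy the base template
echo "Created games/$TICKET_ID on branch $TICKET_ID"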
Setting up a project manually would take three to ten minutes, plus another five to ten minutes for deployment. Depending on the complexity of the game, updating the content could take anything from ten minutes to half a day. The scripts helped, but a lot of time was still spent updating the content or trying to chase down missing information.
Writing code to save time was not enough. We started thinking about a better approach to our workflow so that we could make more use of the scripts: move the content out of the Word documents and into Jira tickets, breaking it out into the relevant custom fields; and rather than having the designers send a link to wherever the assets lived on the public drive, set up a content delivery network (CDN) repository with Staging and Production URLs for the assets.
Things like this can take a while to enforce, but our process did improve over time. We did some research on the API of Jira, our project management tool, and made some requests against the tickets we were working on. We were pulling back a lot of valuable data. So valuable, in fact, that we decided to integrate it into our Bash scripts: reading values from Jira tickets, and also posting comments and tagging stakeholders when we finished.
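For example, reading a ticket and posting a comment can each be done with a plain curl call against Jira's REST API; the domain and credentials below are placeholders:

# Fetch a ticket's fields
curl -s -u "$JIRA_USER:$JIRA_TOKEN" \
  "https://your-company.atlassian.net/rest/api/2/issue/GS-234"

# Post a comment once the build is done
curl -s -u "$JIRA_USER:$JIRA_TOKEN" \
  -X POST -H "Content-Type: application/json" \
  -d '{"body": "This has been released."}' \
  "https://your-company.atlassian.net/rest/api/2/issue/GS-234/comment"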
The Bash scripts were good, but they couldn’t be run on a Windows machine. After doing some digging, we made the decision to use JavaScript to wrap the whole process into a bespoke build tool. We called the tool Mason, and it would change everything.
When you use Git (and we assume you use it in the terminal), you will notice it has a very friendly command-line interface. If you misspell or mistype a command, it will politely suggest what it thinks you were trying to type. A library called Commander provides the same behavior, and it was one of many libraries we used.
Consider the simplified code example below. It’s bootstrapping a Command Line Interface (CLI) application.
#! /usr/bin/env node

const mason = require('commander');
const { version } = require('./package.json');
const console = require('console');

// commands
const create = require('./commands/create');
const setup = require('./commands/setup');

mason
    .version(version);

mason
    .command('setup [env]')
    .description('run setup commands for all envs')
    .action(setup);

mason
    .command('create <ticketId>')
    .description('creates a new game')
    .action(create);

mason
    .command('*')
    .action(() => {
        mason.help();
    });

mason.parse(process.argv);

if (!mason.args.length) {
    mason.help();
}
With npm, you can declare a bin entry in your package.json, and it will create a global alias:
...
"bin": {
  "mason": "src/mason.js"
},
...
Then we run npm link in the root of the project:
npm link
This provides us with a command called mason. So whenever we call mason in our terminal, it runs that mason.js script. All tasks fall under the one umbrella command called mason, and we used it to build games every day. The time we saved was… incredible.
You can see below, in a hypothetical example of what we did back then, that we pass a Jira ticket number to the command as an argument. This curls the Jira API and fetches all the information we need to update the game. It then proceeds to build and deploy the project. We then post a comment and tag the stakeholder and designer to let them know it’s done.
$ mason create GS-234
... calling Jira API
... OK! got values!
... creating a new branch from master called 'GS-234'
... updating templates repository
... copying from template 'pick-from-three'
... injecting values into config JSON
... building project
... deploying game
... Perfect! Here is the live link http://www.fake-studio.com/game/fire-water-earth
... Posted comment 'Hey [~ben.smith], this has been released. Does the design look okay? [~jamie.lane]' on Jira.
All done with a few key strokes!
We were so happy with the whole project. What follows falls into two parts.
The first part is a collection of recipes, or instructional building blocks that behave as individual global commands. These can be used as you go about your day, and can be called at any time to speed up your workflow or for pure convenience.
The second part is a walk-through of creating a cross-platform build tool from the ground up. Each script that achieves a certain task will be its own command, with a main umbrella command (usually the name of your project) encapsulating them all.
We understand that circumstances and flows are different in every business, but you should be able to find something, even if it’s small, that can make your day a little easier at the office.
Angular Version 11 release has updates across the platform including the framework, the CLI and components. Let’s dive in!
To make your apps even faster by speeding up their first contentful paint, we’re introducing automatic font inlining. At compile time, the Angular CLI will download and inline fonts that are used and linked in the application. We enable this by default in apps built with version 11. All you need to do to take advantage of this optimization is update your app!
In Angular v9 we introduced Component Test Harnesses. They provide a robust and legible API surface to help with testing Angular Material components, giving developers a way to interact with Angular Material components using the supported API during testing.
Releasing with version 11, we have harnesses for all of the components! Now developers can create more robust test suites.
We’ve also included performance improvements and new APIs. The parallel function makes working with asynchronous actions in your tests easier by allowing developers to run multiple asynchronous interactions with components in parallel. The manualChangeDetection function gives developers access to finer grained control of change detection by disabling automatic change detection in unit tests.
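Here is a rough sketch of how parallel reads in a test. It assumes a TestBed fixture and Angular Material's button harness are already set up; the assertion style follows the usual Jasmine setup:

import { parallel } from '@angular/cdk/testing';
import { TestbedHarnessEnvironment } from '@angular/cdk/testing/testbed';
import { MatButtonHarness } from '@angular/material/button/testing';

it('reads all button labels concurrently', async () => {
  // `fixture` comes from your TestBed setup (assumed here)
  const loader = TestbedHarnessEnvironment.loader(fixture);
  const buttons = await loader.getAllHarnesses(MatButtonHarness);

  // Run the asynchronous getText() calls in parallel instead of one by one
  const labels = await parallel(() => buttons.map(button => button.getText()));

  expect(labels.length).toBe(buttons.length);
});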
For more details and examples of these APIs and other new features, be sure to check out the documentation for Angular Material Test Harnesses!
We’ve made changes to the builder phase reporting to make it even more helpful during development. We are bringing in new CLI output updates to make logs and reports easier to read.
Screenshot of angular CLI output nicely formatted into columns.
The Angular Language Service provides helpful tools to make development with Angular productive and fun. The current version of the language service is based on View Engine and today we’re giving a sneak peek of the Ivy-based language service. The updated language service provides a more powerful and accurate experience for developers.
Now, the language service will be able to correctly infer generic types in templates the same way the TypeScript compiler does. For example, in the screenshot below we’re able to infer that the iterable is of type string.
Screenshot of intellisense style insights in Angular templates.
This powerful new update is still in development but we wanted to share an update as we keep preparing it for a full release in an upcoming version.
Angular has offered support for HMR, but enabling it required configuration and code changes, making it less than ideal to quickly include in Angular projects. In version 11, we’ve updated the CLI to allow enabling HMR when starting an application with ng serve. To get started, run the following command:
ng serve --hmr
After the local server starts the console will display a message confirming that HMR is active:
NOTICE: Hot Module Replacement (HMR) is enabled for the dev server.
Now during development, the latest changes to components, templates, and styles are instantly updated in the running application, all without requiring a full page refresh. Data typed into forms is preserved, as is scroll position, providing a boost to developer productivity.
We’re bringing a faster development and build cycle by making updates to some key areas.
Now, teams can opt in to webpack v5, and you can currently experiment with module federation. In the future, webpack v5 will clear the path for faster builds (via persistent caching) and smaller bundles.
Support is experimental and under development so we don’t recommend opting in for production uses.
Want to try out webpack 5? To enable it in your project, add the following section to your package.json file:
"resolutions": { "webpack": "5.4.0" }
Currently, you’ll need to use yarn to test this as npm does not yet support the resolutions property.
In previous versions of Angular, we’ve shipped a default implementation for linting (TSLint). Now, TSLint is deprecated by the project creators who recommend migration to ESLint. James Henry together with other folks from the open-source community developed a third-party solution and migration path via typescript-eslint, angular-eslint and tslint-to-eslint-config! We’ve been collaborating closely to ensure a smooth transition of Angular developers to the supported linting stack.
We’re deprecating the use of TSLint and Codelyzer in version 11. This means that in future versions the default implementation for linting Angular projects will not be available.
Head over to the official project page for a guide to incorporate angular-eslint in a project and migrate from TSLint.
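At the time of writing, the migration boils down to two schematics; check the angular-eslint README for the current steps, as these may change:

ng add @angular-eslint/schematics
ng g @angular-eslint/schematics:convert-tslint-to-eslint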
In this update we’re removing support for IE9/IE10 and IE mobile. IE11 is the only version of IE still supported by Angular. We’ve also removed deprecated APIs and added a few to the deprecation list. Be sure to check this out to make sure you are using the latest APIs and following our recommended best practices.
We’ve also updated the roadmap to keep you posted on our current priorities. Some of the announcements in this post are updates on in-progress projects from the roadmap. This reflects our approach of incrementally rolling out larger efforts, and it allows developers to provide early feedback that we can incorporate into the final release.
We collaborated with Lukas Ruebbelke from the Angular community on updating the content of some of the projects to better reflect the value they provide to developers.
Media Library can associate files with Eloquent models. You can, for instance, associate images with a blog post model. In your Blade view, you can retrieve URLs to the associated images. It can handle multiple collections, work with multiple filesystems, create zips on the fly to download multiple files, use a customized directory structure, save bandwidth using responsive images, and much more.
Before exploring Media Library Pro, let’s first explain why we built it in the first place. Here’s how a traditional upload form might look. It uses a regular input of type file.
<form method="POST" enctype="multipart/form-data">
    <x-grid>
        @csrf
        <x-field label="name">
            <x-input id="name" name="name" placeholder="Your first name" />
        </x-field>
        <x-field label="file">
            <input type="file" name="file">
            @error('file') {{ $message }} @enderror
        </x-field>
        <x-button dusk="submit">Submit</x-button>
    </x-grid>
</form>
There are two big problems with this standard upload element.
First, the upload process only starts when the form is submitted. For small files in small forms, this might not be a problem. But imagine you’re uploading a multi-megabyte file in a form. When submitting the form, you now have to wait for the upload to complete before seeing the result of your submission.
The second problem is something that has bothered us for a long, long time. Imagine that the input field is part of a form with some required fields. You select a file and submit the form, but leave some of the required fields empty. You get redirected back to the form, where error messages are now displayed. Your previous file selection is gone, and you need to select the file again.
Media Library Pro is a paid add-on package that offers Blade, Vue, and React components to upload files to your application. It ships with two components. The first one is the attachment component. It is meant to be used on a public-facing page where you want users to upload one or multiple files.
The second one is called the collection component. This one can manage the files in an existing collection. It is meant to be used in the admin part of your app.
Both of these components are available as Vue, React and Blade components. Under the hood, the Blade components are powered by Caleb’s excellent Livewire package.
These components are straightforward to install and are documented in great detail.
Let’s take a look at both the `Attachment` and `Collection` component. In the remainder of the blog post, we’ll use the Blade version of the examples, but rest assured that everything shown can also be done with the Vue and React counterparts.
To get started with the Attachment Blade component you’ll have to use x-media-library-attachment in your view.
<form method="POST">
    @csrf

    <input id="name" name="name">

    <x-media-library-attachment name="avatar"/>

    <button type="submit">Submit</button>
</form>
Here’s how it looks after we’ve selected a file but before submitting the form.
The x-media-library-attachment has taken care of the upload. The file is now stored as a temporary upload. In case there are validation errors when submitting the form, the x-media-library-attachment will display the temporary upload when you get redirected back to the form. There’s no need for the user to upload the file again.
Here’s the form request used to validate the upload.
namespace App\Http\Requests\Blade;

use Illuminate\Foundation\Http\FormRequest;
use Spatie\MediaLibraryPro\Rules\Concerns\ValidatesMedia;

class StoreBladeAttachmentRequest extends FormRequest
{
    use ValidatesMedia;

    public function rules()
    {
        return [
            'name' => 'required',
            'media' => [
                'required',
                $this->validateSingleMedia()->maxItemSizeInKb(3000),
            ],
        ];
    }
}
By applying the ValidatesMedia trait, you get access to the validateSingleMedia method, which allows you to validate the upload. You can chain on many validation methods, which are documented here.
In your controller, you can associate the uploaded file with any model you’d like.
$formSubmission
    ->addFromMediaLibraryRequest($request->media)
    ->toMediaCollection('images');
And that is all you need to do!
The attachment component can be used to handle multiple uploads as well. In this video, you’ll see how that is done.
You can manage the entire contents of a media library collection with the x-media-library-collection component. This component is intended to be used in admin sections.
Here is an example where we will administer an images collection of a $formSubmission model.
<form method="POST">
    @csrf

    <x-field label="name">
        <x-input id="name" name="name" autocomplete="off" placeholder="Your name"
                 value="{{ old('name', $formSubmission->name) }}"/>
    </x-field>

    <x-field label="Images">
        <x-media-library-collection
            name="images"
            :model="$formSubmission"
            collection="images"
            max-items="3"
            rules="mimes:png,jpeg"
        />
    </x-field>

    <x-button dusk="submit" type="submit">Submit</x-button>
</form>
Here’s how that component looks:
This component will display the contents of the entire collection. Files can be added, removed, updated, and reordered.
To validate the response of the form, a form request like this one can be used:
namespace App\Http\Requests\Blade;

use Illuminate\Foundation\Http\FormRequest;
use Spatie\MediaLibraryPro\Rules\Concerns\ValidatesMedia;

class StoreBladeCollectionRequest extends FormRequest
{
    use ValidatesMedia;

    public function rules()
    {
        return [
            'name' => 'required',
            'images' => [
                $this->validateMultipleMedia()
                    ->maxItems(3)
                    ->itemName('required'),
            ],
        ];
    }
}
Again, you need the ValidatesMedia trait. This time, the validateMultipleMedia method should be used. You can chain on the other validation methods, which are documented here.
In the controller, you can associate the media in the collection component with your model using the syncFromMediaLibraryRequest method.
Here’s the relevant code in the controller of the demo app.
$formSubmission
    ->syncFromMediaLibraryRequest($request->images)
    ->toMediaCollection('images');
When using the collection component, you probably want to add some extra fields to be displayed. We’ve made this a straightforward thing to do.
In the screenshot below, we added the “Extra field” field.
You can achieve this by passing a blade view to the fields-view prop of the x-media-library-collection.
<x-media-library-collection
    name="images"
    :model="$formSubmission"
    collection="images"
    max-items="3"
    rules="mimes:png,jpeg"
    fields-view="uploads.blade.partials.custom-properties"
/>
In that custom-properties view, you can put anything that should be displayed in the right half of the collection component.
Here’s the content of that custom-properties view.
@include('media-library::livewire.partials.collection.fields')

<div class="media-library-field">
    <label class="media-library-label">Extra field</label>
    <input
        dusk="media-library-extra-field"
        class="media-library-input"
        type="text"
        {{ $mediaItem->customPropertyAttributes('extra_field') }}
    />
    @error($mediaItem->customPropertyErrorName('extra_field'))
        <span class="media-library-text-error">{{ $message }}</span>
    @enderror
</div>
In the form request, you can use the customProperty rule to validate any extra custom attributes. The second argument of the function can take any Laravel validation rule.
namespace App\Http\Requests\Blade;

use Illuminate\Foundation\Http\FormRequest;
use Spatie\MediaLibraryPro\Rules\Concerns\ValidatesMedia;

class StoreBladeCollectionCustomPropertyRequest extends FormRequest
{
    use ValidatesMedia;

    public function rules()
    {
        return [
            'name' => 'required',
            'images' => [
                $this->validateMultipleMedia()
                    ->maxItems(3)
                    ->itemName('required|max:30')
                    ->customProperty('extra_field', 'required|max:30'),
            ],
        ];
    }
}
In the controller where you process the form submission, you should use the withCustomProperties method to whitelist any extra attributes that you want to sync with your media.
$formSubmission
    ->syncFromMediaLibraryRequest($request->images)
    ->withCustomProperties('extra_field')
    ->toMediaCollection('images');
Customizing the look and feel
By default, both the Attachment and Collection components already look good, but you’ll probably want to adapt them to match the look of your app.
Luckily, this is easy to do. The styles that ship with Media Library Pro can be used by importing or linking dist/styles.css. The styles were built with a default tailwind.config.js.
You can customize the styles by importing src/styles.css and running every @apply rule through your own tailwind.config.js:
/* app.css */

@tailwind base;
@tailwind components;
@tailwind utilities;

@import "src/styles.css";

…
To achieve that behavior where uploaded files are preserved when a form validation error occurs, we use temporary uploads.
Inside the private spatie/laravel-medialibrary-pro repo, there are a lot of tests to make sure the back end integration and the Vue, React, and Blade front end components are working as expected.
We also wanted to have browser tests that ensure that front end components work perfectly with the back end and vice versa. That’s why we added Dusk tests in our demo application. You can see them here.
Let’s take a look at one of them:
/**
 * @test
 *
 * @dataProvider routeNames
 */
public function it_can_handle_a_single_upload(string $routeName)
{
    $this->browse(function (Browser $browser) use ($routeName) {
        $browser
            ->visit(route($routeName))
            ->type('name', 'My name')
            ->attach('@main-uploader', $this->getStubPath('space.png'))
            ->waitForText('Remove')
            ->waitUntilMissing('.media-library-progress-wrap.media-library-progress-wrap-loading')
            ->press('@submit')
            ->assertSee('Your form has been submitted');

        $this->assertCount(1, FormSubmission::get());
        $this->assertEquals('space.png', FormSubmission::first()->getFirstMedia('images')->file_name);
    });
}
This test will upload a file and make sure that the file is associated with a model after the form is submitted.
A thing to note here is the @dataProvider annotation. This makes PHPUnit run the test once for each result returned by the routeNames function defined in the same file.
public function routeNames(): array
{
    return [
        ['vue.attachment'],
        ['react.attachment'],
        ['blade.attachment'],
    ];
}
You can see that, in combination with the routeNames function, it_can_handle_a_single_upload will run for the vue.attachment, react.attachment, and blade.attachment routes. Visiting these routes will display the form that uses the Vue, React, or Blade component, respectively. So this one test covers a lot of logic: it makes sure that the components work using any of the technologies. This gives us a lot of confidence that all of the components are working correctly.
The function-like HTML segment refers to a block of HTML with the ability to accept context variables (in other words, parameters). A typical Angular component has two major parts of logic: an HTML template and a TypeScript class. The capability to utilize this kind of function-like HTML segment is essential for a good shared component. This is because a shared component with only a fixed HTML template is very difficult to fit to all the needs of different use cases. Trying to satisfy all potential use cases with a single, fixed HTML template will usually end up with a large template full of conditional statements (like *ngIf), which is painful to read and maintain.
Here we would like to explain, with an example, how we can utilize TemplateRef to define function-like HTML segments for communication between templates, which is a good solution to the large-template problem.
Assume that there is a shared component DataListComponent, which takes an array of data and displays them in the view:
export interface DataTableRow {
  dataType: string;
  value: any;
}

@Component({
  selector: 'data-list',
  template: `
    <div *ngFor="let row of data" [ngSwitch]="row.dataType">
      <div *ngSwitchCase="'string'">{{row.value}}</div>
      <div *ngSwitchCase="'number'"># {{row.value | number}}</div>
      <div *ngSwitchCase="'date'">{{row.value | date}}</div>
    </div>
  `
})
export class DataListComponent {
  @Input() data: DataTableRow[] = [];
}
It understands only three types of data for now: string, number, and date. When we want to add more types, the easiest way is to simply add more switch cases. That is totally fine when the new types are generic enough to have universal representations. Yet, for data whose presentation depends on the consuming component, adding more switch cases can make the code very dirty.
Say we want to add a new type boolean, which displays true/false in FirstComponent and yes/no in SecondComponent. If we simply go for the more-switch-cases solution, we may end up with something like this:
<div *ngSwitchCase="'boolean-firstComponent'">
  {{ row.value ? 'true' : 'false' }}
</div>
<div *ngSwitchCase="'boolean-secondComponent'">
  {{ row.value ? 'yes' : 'no' }}
</div>
This approach is bad as the shared component now contains component-specific logic. Besides, this block of code is going to expand really fast when there are more new use cases in the future, which will soon become a disaster. Ideally, we want to pass HTML segments from the parents, so that we can keep those specific logic away from the shared component.
@Component({
  template: `
    <data-list [data]="data">
      <!-- component specific logic to display true/false -->
    </data-list>
  `,
  ...
})
export class FirstComponent {...}

@Component({
  template: `
    <data-list [data]="data">
      <!-- component specific logic to display yes/no -->
    </data-list>
  `,
  ...
})
export class SecondComponent {...}
The logic behind it is actually very straightforward. First, we define templates with context in the consuming components:
@Component({
  template: `
    <data-list [data]="data">
      <ng-template let-value="value">
        {{value ? 'true' : 'false'}}
      </ng-template>
    </data-list>
  `,
  ...
})
export class FirstComponent {...}

@Component({
  template: `
    <data-list [data]="data">
      <ng-template let-value="value">
        {{value ? 'yes' : 'no'}}
      </ng-template>
    </data-list>
  `,
  ...
})
export class SecondComponent {...}
Next, we add the logic to read and present the template segment inside the shared component:
@Component({
  selector: 'data-list',
  template: `
    <div *ngFor="let row of data" [ngSwitch]="row.dataType">
      <div *ngSwitchCase="'string'">{{row.value}}</div>
      <div *ngSwitchCase="'number'"># {{row.value | number}}</div>
      <div *ngSwitchCase="'date'">{{row.value | date}}</div>
      <div *ngSwitchCase="'boolean'">
        <ng-container *ngTemplateOutlet="rowTemplate; context: { value: row.value }"></ng-container>
      </div>
    </div>
  `
})
export class DataListComponent {
  @Input() data: DataTableRow[] = [];
  @ContentChild(TemplateRef) rowTemplate: TemplateRef<any>;
}
Now we have a shared component that is capable of interpreting an HTML segment from the outside. Yet it is still not ideal. What if we have more than one template?
This one is trickier. Although TemplateRef is capable of parsing context, it doesn’t have a name or ID that we can rely on to distinguish multiple templates from each other programmatically. As a result, when we have more than one template, we need to add a wrapper component on top so that we can add identifiers.
@Component({
  selector: 'custom-row-definition',
  template: ''
})
export class CustomRowDefinitionComponent {
  @Input() dataType: string;
  @ContentChild(TemplateRef) rowTemplate: TemplateRef<any>;
}
Instead of directly retrieving the TemplateRef in the shared component, we retrieve the wrapper:
@Component({
  selector: 'data-list',
  template: `
    <div *ngFor="let row of data" [ngSwitch]="row.dataType">
      <div *ngSwitchCase="'string'">String: {{row.value}}</div>
      <div *ngSwitchCase="'number'"># {{row.value | number}}</div>
      <div *ngSwitchCase="'date'">{{row.value | date}}</div>
      <ng-container *ngFor="let def of customRowDefinitions">
        <ng-container *ngSwitchCase="def.dataType">
          <ng-container *ngTemplateOutlet="def.rowTemplate; context: { value: row.value }"></ng-container>
        </ng-container>
      </ng-container>
    </div>
  `
})
export class DataListComponent {
  @Input() data: DataTableRow[] = [];
  @ContentChildren(CustomRowDefinitionComponent) customRowDefinitions: QueryList<CustomRowDefinitionComponent>;
}
(Having multiple ng-container elements together with structural directives may potentially cause performance issues, but that is not the main point of this article, so we leave it as-is for simplicity.)
In this example, we use the dataType property inside the wrapper as identifiers for the templates. As a result, we can now define multiple templates with different dataType.
@Component({
  selector: 'app-root',
  template: `
    <data-list [data]="data">
      <custom-row-definition dataType="array">
        <ng-template let-value="value">
          {{value.join(' - ')}}
        </ng-template>
      </custom-row-definition>
      <custom-row-definition dataType="money">
        <ng-template let-value="value">
          $ {{value | number}}
        </ng-template>
      </custom-row-definition>
    </data-list>
  `
})
export class AppComponent {
  data: DataTableRow[] = [
    { dataType: 'string', value: 'Row 1' },
    { dataType: 'number', value: 500 },
    { dataType: 'date', value: new Date() },
    { dataType: 'array', value: [1, 2, 3, 4] },
    { dataType: 'money', value: 200 }
  ];
}
Some may ask: why don’t we just use named ng-content slots to project the content from the outside? The major difference is the capability to have context (parameters). ng-content is like a function without parameters; it cannot achieve real mutual communication between templates. It is a one-way channel for merging HTML segments from the outside, with no real interaction with the template inside, so it cannot handle use cases like the example above.
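To make the contrast concrete, here is a small sketch of the ng-content approach (the selector and markup are invented for illustration). The projected markup is rendered as-is and only sees the parent’s scope, so the shared component has no way to hand row.value back to it:

<!-- Inside the shared component's template: one-way projection -->
<ng-content select="[row]"></ng-content>

<!-- Usage: there is no equivalent of let-value="value" here -->
<data-list [data]="data">
  <div row>This markup cannot receive the current row's value.</div>
</data-list>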
Node is a leader in the asynchronous framework market. The platform now supports a huge portion of startups and businesses that are earning hundreds of millions of dollars in revenue, establishing itself as a platform that can sustain a huge load while retaining smooth performance. Node.js was perhaps the biggest revelation of modern server engineering that we saw. By the looks of it, Node isn’t stopping any time soon; it’s the exact opposite. The project continues to push out frequent updates and maintains old releases to support older platforms. New releases close security loopholes in OpenSSL and also add more support for languages like C and C++.
Starting with Node.js is a fairly easy process; the guidelines are outlined and thousands of projects are sitting on GitHub, waiting for you to inspect and analyze their architecture. Node.js works great on all platforms, even on Windows 10, for those who are interested. That makes it a truly great platform to begin learning front-end and back-end development together. Let’s not forget that Node has the most populated package manager of any framework or language known to man. Thus, building a website takes only a couple of minutes, thanks to the modules and libraries available through the package manager (npm). So let’s get started with the top Node.js packages.
All common programming languages share similar structures in the way things are built. One of the fastest ways to get a programming language to serve your needs is through a framework. Express is the leading Node.js framework for quickly creating and publishing applications and APIs. The framework’s minimal structure allows any Node.js developer to quickly launch a functional application with the use of Express Generator. Express gives you a solid outline to build your apps on top of. Combine it with any of the other packages we will discuss, and you will quickly realize just how amazing this framework truly is.
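A minimal Express app fits in a handful of lines, something like this:

const express = require('express');
const app = express();

// Respond to GET / with a greeting
app.get('/', (req, res) => {
  res.send('Hello from Express!');
});

app.listen(3000, () => console.log('Listening on port 3000'));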
Node.js is known for being the framework to use for scaling large applications, and infrastructure. Process management should be an essential priority for any Node.js user. PM2 offers both process management for production applications, and a load-balancer to help with any possible performance tweaks. With PM2, your applications stay online indefinitely, giving you the tools to reload apps without having to experience any sort of downtime. Is it a surprise that hundreds of thousands of Node.js users consider this an essential tool to have?
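Day-to-day PM2 usage comes down to a few short commands, for example:

pm2 start app.js    # daemonize the app and keep it alive
pm2 reload all      # zero-downtime reload of all processes
pm2 logs            # tail logs from managed processes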
Even more asynchronous action is going on in this Node.js package roundup: this time we have Mocha, a feature-rich JavaScript test framework running on Node.js and in the browser, making asynchronous testing simple and fun. Mocha tests run serially, allowing for flexible and accurate reporting, while mapping uncaught exceptions to the correct test cases. Testing is so important for understanding how well the application is performing, where we can locate any particular leaks, and how we can fix the bugs, problems, and irritations we experience. Testing lets developers understand better how their code performs, and in turn learn more skills as they continue down their chosen path.
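A minimal Mocha spec looks something like this (using Node’s built-in assert module; many projects pair Mocha with an assertion library such as Chai):

const assert = require('assert');

describe('Array#indexOf', function () {
  it('returns -1 when the value is not present', function () {
    assert.strictEqual([1, 2, 3].indexOf(4), -1);
  });

  it('handles asynchronous code via async/await', async function () {
    const value = await Promise.resolve(42);
    assert.strictEqual(value, 42);
  });
});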
A modern JavaScript utility library delivering modularity, performance, and extras. Lodash makes JavaScript easier by taking the hassle out of working with arrays, numbers, objects, strings, etc. Lodash’s modular methods are great for iterating arrays, objects, and strings; manipulating and testing values; and creating composite functions.
ESLint is a static code analysis tool for identifying problematic patterns found in JavaScript code. Rules in ESLint are configurable, and customized rules can be defined and loaded. ESLint covers both code quality and coding style issues. In short, the goal is to make code more consistent and avoid bugs. In many ways, it is similar to JSLint and JSHint, with a few exceptions: it uses Espree for JavaScript parsing, it uses an AST to evaluate patterns in code, and it is completely pluggable (every rule is a plugin and you can add more at runtime).
Passport is a unique authentication module for Node.js devs. The main goal of Passport is to help with authentication requests, which it achieves through the use of third-party plugins that act as authentication methods, otherwise known as strategies. The Passport API is straightforward: you give Passport a request that you need to authenticate, and Passport in turn gives you hooks that let you control what happens after an authentication call fails or succeeds. Exploring the strategies, there are hundreds of authentication methods to choose from, from internal ones all the way up to external ones like Google, Facebook, and others.
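As a sketch, a local username/password strategy looks roughly like this. Here findUser is a hypothetical stand-in for your own lookup logic, and app is assumed to be an Express app:

const passport = require('passport');
const LocalStrategy = require('passport-local').Strategy;

// Verify a username/password pair; findUser is a hypothetical helper
passport.use(new LocalStrategy((username, password, done) => {
  findUser(username, password)
    .then(user => done(null, user || false))
    .catch(err => done(err));
}));

// On success continue; on failure redirect back to the login page
app.post('/login',
  passport.authenticate('local', { failureRedirect: '/login' }),
  (req, res) => res.redirect('/'));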
One of the most stable and well-maintained time-manipulation libraries you can find. Of the whole collection of libraries created to solve the issues of formatting, parsing, converting, and generally working with different forms of time, Moment.js is the one that has seen the widest adoption.
import moment from "moment";

// in relation to the release date of this post
moment().format("MMMM Do YYYY");          // June 6th 2019
moment("20111031", "YYYYMMDD").fromNow(); // 8 years ago
moment().subtract(10, "days").calendar(); // 05/27/2019
With its latest v2 release, Moment.js was rewritten to support the latest ES6 syntax. This brings improved modularity and better performance for evergreen browsers. Such things are important, especially when dealing with a library as big as Moment.js.
With Chalk, we’re entering the world of terminal-related tools and libraries, where download counts, and thus popularity, go crazy! Chalk is an extremely simple library, created for one simple purpose: styling your terminal strings! Just like Require, it proves that the most useful things are often the simplest ones.
import chalk from "chalk";

// string concatenation - template literals
console.log(`${chalk.blue("Hello")} World${chalk.red("!")}`);

// chainable API
console.log(chalk.blue.bgRed.bold("Hello world!"));
Of course, the API is simple, intuitive (chainable), and works really well with all the features that JS has to offer natively. The official page of the package states that it’s used by more than 20K other packages! Maybe that’s where the weekly download count (~25M) comes from. Either way, such numbers cannot be ignored.
Socket.io is a Node.js package that allows you to build truly real-time communication applications requiring real-time streams of data, either directly from the data you are working with or through an application programming interface (API) that comes from another source. Example apps include a bot collecting the latest tweets from Twitter or a bot watching the news on Facebook; combining such APIs lets you explore interesting things that work with data in real time.
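A minimal server-side sketch looks like this:

const io = require('socket.io')(3000);

io.on('connection', (socket) => {
  // Push a real-time event to the newly connected client
  socket.emit('news', { headline: 'Hello from the server' });

  // React to events coming back from the client
  socket.on('subscribe', (topic) => {
    socket.join(topic);
  });
});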
Request is designed to be the simplest way possible to make HTTP calls. It supports HTTPS and follows redirects by default. You can also stream a file to a PUT or POST request. Although it’s deprecated, it’s still a widely used package for network calls. A couple of basic usages are shown below.
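Both patterns follow the library’s documented API; the URLs here are placeholders:

const fs = require('fs');
const request = require('request');

// Simple GET -- redirects are followed by default
request('https://example.com', (error, response, body) => {
  if (error) return console.error(error);
  console.log(response.statusCode, body.length);
});

// Stream a local file into a PUT request
fs.createReadStream('upload.json').pipe(request.put('https://example.com/upload'));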
The fifth major webpack release came out recently, almost two years after the last major release (version 4). It brings a lot of changes to the most-used module bundler in the JavaScript ecosystem. If, like me, you started your front-end career prior to the rise of webpack, you remember the pain and frustration of working with tools like gulp and grunt.
Let’s take a look at the breaking changes and the improvements that come with the new release of this incredibly popular library.
This new version concentrates on five key areas.
Slow builds are one of the most common complaints from developers about webpack. The module bundler now offers an opt-in filesystem cache. This should improve our productivity as developers by speeding up our development builds.
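Opting in is a small change in your configuration. The buildDependencies entry below is optional, but it is a commonly recommended addition so that config edits invalidate the cache:

// webpack.config.js
module.exports = {
  // ...
  cache: {
    type: 'filesystem', // the default is 'memory'
    buildDependencies: {
      config: [__filename], // invalidate the cache when this config changes
    },
  },
};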
Improvements have been made to tree shaking (also known as dead code elimination). While previous versions of webpack were able to remove unused code, version 5 takes it even further. webpack is now able to remove code inside of modules, leading to even smaller bundle sizes. To read more about all of the optimization features of webpack 5, check out the official documentation.
After bundle size, the thing that can improve your app's loading time the most is caching. With caching, returning visitors to your application experience an almost instantaneous loading experience. With webpack 5, changes to your code that don’t change the minimized version (e.g., comments or variable names) do not result in cache invalidation. This means that your users will be able to experience the performance improvements of caching for longer.
Some of the changes introduced in this version will not have any visible impact on your application’s performance today. Instead, they are meant to allow for new features and improvements in later versions of webpack 5.
These future features include using http(s) imports as module externals. This will help with the development of micro frontends. To read more about these new and exciting features, check out the official documentation here.
Another breaking change is bumping the minimum Node.js version from 6 to 10.13.0. Dropping support for older Node.js versions allows the team to simplify their code and remove workarounds for those older versions.
webpack 5 also brings a new experiments configuration option with support for WebAssembly, async WebAssembly, top-level await, and outputting your bundle as a module (previously only possible with rollup).
This new feature, in short, allows multiple webpack builds to work together. It allows your application to dynamically load code from another application (that is, a different webpack build). The most popular application of module federation is to enable micro-frontend architecture.
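As a rough sketch (the names and URLs below are invented), a host application declares where to find a remote build at runtime:

// webpack.config.js of the consuming ("host") build
const { ModuleFederationPlugin } = require('webpack').container;

module.exports = {
  // ...
  plugins: [
    new ModuleFederationPlugin({
      name: 'host',
      remotes: {
        // Code is loaded at runtime from the other webpack build
        shop: 'shop@http://localhost:3001/remoteEntry.js',
      },
      shared: ['react', 'react-dom'],
    }),
  ],
};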
Going over module federation in detail is beyond the scope of this article. If you are interested in learning more, be sure to read the official webpack post here.
Here’s a rundown of the breaking changes that made it into this version, with corresponding migration advice.
All items that were marked as deprecated in version 4 have been removed. If your webpack 4 build prints deprecation warnings, be sure to address those before upgrading.
The plugins IgnorePlugin and BannerPlugin accept different arguments. Read more here.
In previous versions of webpack, polyfills for native Node.js modules like crypto were included. These have been removed. Instead, you should use frontend-focused libraries or install the polyfills yourself.
This is a personal question, and it really depends on how you use webpack in your project. Most developers who use webpack, use a lot of plugins. You need to make sure that the plugins you use, support this new version.
If you are using Next.js, you can upgrade to webpack 5 by setting the version as a yarn resolution in your package.json. But again, if you have a custom webpack config, you will need to ensure that your config works with webpack 5.
The big advantage (and disadvantage for some) of Create React App is that there is no official way to customize your webpack config. For those of you using CRA, you will need to wait until react-scripts is upgraded to support webpack 5. According to a contributor, this should happen in Create React App version 4.1 (source).
For more information about migrating from version 4 to 5, be sure to check out the official migration guide.
This new release of webpack makes us even more excited for the future of frontend development. It’s so refreshing to see new features and improvements to a tool that we use every day. We should see its improvements driving innovation in the community for the next few years.
Most frontend developers don’t end up touching webpack very much, and just assume that it “just works”. We’ve said it before, and we’ll say it again: We think this is a mistake. Understanding how your build tools work makes you a stronger developer and is invaluable in debugging errors.
While the webpack team will continue to support version 4, by fixing bugs and adding features, for the foreseeable future, they suggest that you upgrade to version 5. With almost any library (except perhaps React), there comes a time when you need to make breaking changes and architectural improvements that rely on those breaking changes.
In short: while dealing with breaking changes is annoying, we don’t think it’s too much of an ask to make some changes to your application’s configuration every two years in exchange for a better and faster build system.
Sanctum is Laravel’s lightweight API authentication package. This tutorial will go over using Laravel Sanctum to authenticate a mobile app. The app will be built in Flutter, Google’s cross-platform app development toolkit. We may skip some implementation details of the mobile app since that is not the focus of this tutorial.
We’ve set up Homestead to provision a domain name, api.sanctum-mobile.test, where the backend will be served, as well as a MySQL database.
First, create the Laravel app:
laravel new sanctum_mobile
At the time of writing, this gives us a new Laravel project (v8.6.0). As with the SPA tutorial, the API will provide a list of books, so we’ll create the same resources:
php artisan make:model Book -mr
The -mr flags create the migration and controller too. Before we touch the migrations, let’s first install the Sanctum package, since we’ll need its migrations as well.
composer require laravel/sanctum
php artisan vendor:publish --provider="Laravel\Sanctum\SanctumServiceProvider"
Now, create the books migration:
Schema::create('books', function (Blueprint $table) {
    $table->id();
    $table->string('title');
    $table->string('author');
    $table->timestamps();
});
Next, run your app’s migrations:
php artisan migrate
If you now take a look in the database, you’ll see the Sanctum migration has created a personal_access_tokens table, which we’ll use later when authenticating the mobile app.
Let’s update DatabaseSeeder.php to give us some books (and a user for later):
Book::truncate();

$faker = \Faker\Factory::create();

for ($i = 0; $i < 50; $i++) {
    Book::create([
        'title' => $faker->sentence,
        'author' => $faker->name,
    ]);
}

User::truncate();

User::create([
    'name' => 'Alex',
    'email' => 'alex@alex.com',
    'password' => Hash::make('pwdpwd'),
]);
Now seed the database: php artisan db:seed. Finally, create the route and the controller action. Add this to the routes/api.php file:
Route::get('book', [BookController::class, 'index']);
and then in the index method of BookController, return all the books:
return response()->json(Book::all());
After checking that the endpoint works — curl https://api.sanctum-mobile.test/api/book — it’s time to start the mobile app.
For the mobile app, we’ll be using Android Studio and Flutter. Flutter allows you to create cross-platform apps that re-use the same code for Android and iPhone devices. First, follow the instructions to install Flutter and to set up Android Studio, then launch Android Studio and click “Create a new Flutter project.”
Follow the recipe in Flutter’s cookbook to fetch data from the internet to create a page that fetches a list of books from the API. A quick and easy way to expose our API to the Android Studio device is to use Homestead’s share command:
share api.sanctum-mobile.test
The console will output an ngrok page, which will give you a URL (something like https://0c9775bd.ngrok.io) exposing your local server to the public. (An alternative to ngrok is Beyond Code’s Expose.) So let’s create a utils/constants.dart file to put that in:
const API_URL = 'http://191b43391926.ngrok.io';
Now, back to the Flutter cookbook. Create a file books.dart which will contain the classes required for our book list. First, a Book class to hold the data from the API request:
class Book {
  final int id;
  final String title;
  final String author;

  Book({this.id, this.title, this.author});

  factory Book.fromJson(Map<String, dynamic> json) {
    return Book(
      id: json['id'],
      title: json['title'],
      author: json['author'],
    );
  }
}
Second, a BookList class to fetch the books and call the builder to display them:
class BookList extends StatefulWidget {
  @override
  _BookListState createState() => _BookListState();
}

class _BookListState extends State<BookList> {
  Future<List<Book>> futureBooks;

  @override
  void initState() {
    super.initState();
    futureBooks = fetchBooks();
  }

  Future<List<Book>> fetchBooks() async {
    List<Book> books = new List<Book>();
    final response = await http.get('$API_URL/api/book');
    if (response.statusCode == 200) {
      List<dynamic> data = json.decode(response.body);
      for (int i = 0; i < data.length; i++) {
        books.add(Book.fromJson(data[i]));
      }
      return books;
    } else {
      throw Exception('Problem loading books');
    }
  }

  @override
  Widget build(BuildContext context) {
    return Column(
      children: <Widget>[
        BookListBuilder(futureBooks: futureBooks),
      ],
    );
  }
}
And last, a BookListBuilder to display the books:
class BookListBuilder extends StatelessWidget {
  const BookListBuilder({
    Key key,
    @required this.futureBooks,
  }) : super(key: key);

  final Future<List<Book>> futureBooks;

  @override
  Widget build(BuildContext context) {
    return FutureBuilder<List<Book>>(
      future: futureBooks,
      builder: (context, snapshot) {
        if (snapshot.hasData) {
          return Expanded(child: ListView.builder(
            itemCount: snapshot.data.length,
            itemBuilder: (context, index) {
              Book book = snapshot.data[index];
              return ListTile(
                title: Text('${book.title}'),
                subtitle: Text('${book.author}'),
              );
            },
          ));
        } else if (snapshot.hasError) {
          return Text("${snapshot.error}");
        }
        return CircularProgressIndicator();
      }
    );
  }
}
Now we just need to modify the MyApp class in main.dart to load the BookList:
class MyApp extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Sanctum Books',
      home: new Scaffold(
        body: BookList(),
      )
    );
  }
}
Now launch this in your test device or the emulator, and you should see a list of books.
Great, so we know the API is working and that we can fetch books from it. The next step is to set up authentication.
We’re going to use the provider package and follow the guidelines in the official documentation for simple state management. We want to create an authentication provider that keeps track of the logged-in status and eventually communicates with the server. Create a new file, auth.dart. Here’s where the authentication functionality will go. For the moment, we’ll return true so we can test that the process works:
class AuthProvider extends ChangeNotifier {
  bool _isAuthenticated = false;

  bool get isAuthenticated => _isAuthenticated;

  Future<bool> login(String email, String password) async {
    print('logging in with email $email and password $password');
    _isAuthenticated = true;
    notifyListeners();
    return true;
  }
}
With this provider, we can now check whether we’re authenticated and display the correct page accordingly. Modify your main function to include the provider:
void main() {
  runApp(
    ChangeNotifierProvider(
      create: (BuildContext context) => AuthProvider(),
      child: MyApp(),
    )
  );
}
… and modify the MyApp class to show the BookList widget if we’re logged-in, or a LoginForm widget otherwise:
body: Center(
  child: Consumer<AuthProvider>(
    builder: (context, auth, child) {
      switch (auth.isAuthenticated) {
        case true:
          return BookList();
        default:
          return LoginForm();
      }
    },
  )
),
The LoginForm classes contain a lot of “widgety” cruft, so we’ll refer you to the GitHub repo if you’re interested in looking at it. Anyway, if you load the app in your test device, you should see a login form. Fill in a random email and password, submit the form, and you’ll see the list of books.
Ok, let’s set up the backend to handle the authentication. The docs tell us to create a route that will accept the username and password, as well as a device name, and return a token. So let’s create a route in the api.php file:
Route::post('token', [AuthController::class, 'requestToken']);
and a controller: php artisan make:controller AuthController. This will contain the code from the docs:
public function requestToken(Request $request): string
{
    $request->validate([
        'email' => 'required|email',
        'password' => 'required',
        'device_name' => 'required',
    ]);

    $user = User::where('email', $request->email)->first();

    if (! $user || ! Hash::check($request->password, $user->password)) {
        throw ValidationException::withMessages([
            'email' => ['The provided credentials are incorrect.'],
        ]);
    }

    return $user->createToken($request->device_name)->plainTextToken;
}
Provided the username and password are valid, this will create a token, save it in the database, and return it to the client. To get this to work, we need to add the HasApiTokens trait to our User model. This gives us a tokens relationship, allowing us to create and fetch tokens for the user, and a createToken method. The token itself is a sha256 hash of a 40-character random string: this string (unhashed) is returned to the client, which should save it to use with any future requests to the API. More precisely, the string returned to the client is composed of the token’s id, followed by a pipe character (|), followed by the plain-text (unhashed) token.
So now we have this endpoint in place, let’s update the app to use it. The login method will now have to post the email, password, and device_name to this endpoint and, if it gets a 200 response, save the token in the device’s storage. For device_name, we’re using the device_info package to get the device’s unique ID, but in fact this string is arbitrary.
final response = await http.post('$API_URL/api/token', body: {
  'email': email,
  'password': password,
  'device_name': await getDeviceId(),
}, headers: {
  'Accept': 'application/json',
});

if (response.statusCode == 200) {
  String token = response.body;
  await saveToken(token);
  _isAuthenticated = true;
  notifyListeners();
}
We use the shared_preferences package, which allows for the storage of simple key-value pairs, to save the token:
saveToken(String token) async {
  final prefs = await SharedPreferences.getInstance();
  await prefs.setString('token', token);
}
So now we’ve got the app displaying the books page after a successful login. But of course, as things stand, the books are accessible with or without a successful login. Try it out: curl https://api.sanctum-mobile.test/api/book. So let’s protect the route:
Route::middleware('auth:sanctum')->get('book', [BookController::class, 'index']);
Log in again via the app, and this time you’ll get an error: “Problem loading books”. You are successfully authenticating, but because we don’t yet send the API token with our request to fetch the books, the API is quite rightly not sending them. As in the previous tutorial, let’s look at the Sanctum guard to see what it’s doing here:
if ($token = $request->bearerToken()) {
    $model = Sanctum::$personalAccessTokenModel;

    $accessToken = $model::findToken($token);

    if (! $accessToken ||
        ($this->expiration && $accessToken->created_at->lte(now()->subMinutes($this->expiration))) ||
        ! $this->hasValidProvider($accessToken->tokenable)) {
        return;
    }

    return $this->supportsTokens($accessToken->tokenable)
        ? $accessToken->tokenable->withAccessToken(
            tap($accessToken->forceFill(['last_used_at' => now()]))->save()
        )
        : null;
}
The first condition is skipped since we aren’t using the web guard, which leaves us with the above code. It only runs if the request has a “Bearer” token, i.e. if it contains an Authorization header that starts with the string “Bearer”. If it does, it calls the findToken method on the PersonalAccessToken model:
if (strpos($token, '|') === false) {
    return static::where('token', hash('sha256', $token))->first();
}

[$id, $token] = explode('|', $token, 2);

if ($instance = static::find($id)) {
    return hash_equals($instance->token, hash('sha256', $token))
        ? $instance
        : null;
}
The first conditional checks whether the pipe character is in the token and, if not, returns the first model that matches the token. We assume this is to preserve backward compatibility with versions of Sanctum before 2.3, which did not include the pipe character in the plain-text token returned to the user. (Here is the pull request: the reason was to make the token lookup query more performant.) Anyway, assuming the pipe character is there, Sanctum grabs the model’s ID and the token itself, and checks whether the hash matches what is stored in the database. If it does, the model is returned.
Back in Guard: if no token is returned, or if we’re considering expiring tokens (which we’re not in this case), return null (in which case authentication fails). Finally:
return $this->supportsTokens($accessToken->tokenable)
    ? $accessToken->tokenable->withAccessToken(
        tap($accessToken->forceFill(['last_used_at' => now()]))->save()
    )
    : null;
Check that the tokenable model (i.e., the User model) supports tokens; in other words, that it uses the HasApiTokens trait. If not, return null and authentication fails. If so, then return this:
$accessToken->tokenable->withAccessToken(
    tap($accessToken->forceFill(['last_used_at' => now()]))->save()
)
The above example uses the single-argument version of the tap helper. This can be used to force an Eloquent method (in this case, save) to return the model itself. Here the access token model’s last_used_at timestamp is updated. The saved model is then passed as an argument to the User model’s withAccessToken method (which it gets from the HasApiTokens trait). This is a compact way of updating the token’s last_used_at timestamp and returning its associated User model. Which means authentication has been successful.
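As a tiny generic illustration of that single-argument tap pattern (a sketch of our own, not code from Sanctum):

// $token->save() alone returns a boolean;
// tap($token)->save() calls save() and then returns $token itself,
// so the updated model can be assigned or passed along in one expression.
$token = tap($token->forceFill(['last_used_at' => now()]))->save();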
So, back to the app. With this authentication in place, we need to update the app’s call to the book endpoint to pass the token in the request’s Authorization header. To do this, update the fetchBooks method to grab the token from the Auth provider, then add it to the header:
String token = await Provider.of<AuthProvider>(context, listen: false).getToken();

final response = await http.get('$API_URL/book', headers: {
  'Authorization': 'Bearer $token',
});
Don’t forget to add a getToken method to the AuthProvider class:
Future<String> getToken() async {
  final prefs = await SharedPreferences.getInstance();
  return prefs.getString('token');
}
Now try logging in again, and this time the books should be displayed.
Browsers don’t understand JSX out of the box, so most React users rely on a compiler like Babel or TypeScript to transform JSX code into regular JavaScript. Many preconfigured toolkits like Create React App or Next.js also include a JSX transform under the hood.
Together with the React 17 release, we’ve wanted to make a few improvements to the JSX transform, but we didn’t want to break existing setups. This is why we worked with Babel to offer a new, rewritten version of the JSX transform for people who would like to upgrade.
Upgrading to the new transform is completely optional, but it has a few benefits:
With the new transform, you can use JSX without importing React.
Depending on your setup, the compiled output may slightly improve the bundle size.
In the future, it will enable improvements that reduce the number of concepts you need to learn React.
This upgrade will not change the JSX syntax and is not required. The old JSX transform will keep working as usual, and there are no plans to remove the support for it.
React 17 RC already includes support for the new transform, so go give it a try! To make it easier to adopt, after React 17 is released, we also plan to backport its support to React 16.x, React 15.x, and React 0.14.x. You can find the upgrade instructions for different tools below.
Now let’s take a closer look at the differences between the old and the new transform.
When you use JSX, the compiler transforms it into React function calls that the browser can understand. The old JSX transform turned JSX into React.createElement(…) calls.
For example, let’s say your source code looks like this:
import React from 'react'; function App() { return <h1>Hello World</h1>; }
Under the hood, the old JSX transform turns it into regular JavaScript:
import React from 'react'; function App() { return React.createElement('h1', null, 'Hello world'); }
Note: Your source code doesn't need to change in any way. We're describing how the JSX transform turns your JSX source code into the JavaScript code a browser can understand.
However, this is not perfect:
Because JSX was compiled into React.createElement, React needed to be in scope if you used JSX.
There are some performance improvements and simplifications that React.createElement does not allow.
To solve these issues, React 17 introduces two new entry points to the React package that are intended to only be used by compilers like Babel and TypeScript. Instead of transforming JSX to React.createElement, the new JSX transform automatically imports special functions from those new entry points in the React package and calls them.
Let’s say that your source code looks like this:
function App() {
  return <h1>Hello World</h1>;
}
This is what the new JSX transform compiles it to:
// Inserted by a compiler (don't import it yourself!)
import { jsx as _jsx } from 'react/jsx-runtime';

function App() {
  return _jsx('h1', { children: 'Hello world' });
}
Note how our original code did not need to import React to use JSX anymore! (But we would still need to import React in order to use Hooks or other exports that React provides.)
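As an aside (our own sketch, not part of the React announcement): when an element has several static children, the compiler imports a sibling jsxs function from the same entry point; the exact output can vary by compiler and version:

// Inserted by a compiler (don't import it yourself!)
import { jsx as _jsx, jsxs as _jsxs } from 'react/jsx-runtime';

function List() {
  // <ul><li>One</li><li>Two</li></ul> becomes a jsxs call,
  // with the children passed as an array:
  return _jsxs('ul', {
    children: [
      _jsx('li', { children: 'One' }),
      _jsx('li', { children: 'Two' }),
    ],
  });
}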
This change is fully compatible with all of the existing JSX code, so you won’t have to change your components. If you’re curious, you can check out the technical RFC for more details about how the new transform works.
Note: The functions inside react/jsx-runtime and react/jsx-dev-runtime must only be used by the compiler transform. If you need to manually create elements in your code, you should keep using React.createElement. It will continue to work and is not going away.
If you aren’t ready to upgrade to the new JSX transform or if you are using JSX for another library, don’t worry. The old transform will not be removed and will continue to be supported.
If you want to upgrade, you will need two things:
A version of React that supports the new transform (currently, React 17 RC supports it).
A compatible compiler (see the instructions for different tools below).
Since the new JSX transform doesn’t require React to be in scope, we’ve also prepared an automated script that will remove the unnecessary imports from your codebase.
Create React App support has been added and will be available in the upcoming v4.0 release which is currently in beta testing.
Next.js v9.5.3+ uses the new transform for compatible React versions.
Gatsby v2.24.5+ uses the new transform for compatible React versions.
Note: If you get this Gatsby error after upgrading to React 17.0.0-rc.2, run npm update to fix it.
Support for the new JSX transform is available in Babel v7.9.0 and above.
First, you’ll need to update to the latest Babel core and the relevant plugin or preset.
If you are using @babel/plugin-transform-react-jsx:
# for npm users
npm update @babel/core @babel/plugin-transform-react-jsx

# for yarn users
yarn upgrade @babel/core @babel/plugin-transform-react-jsx
If you are using @babel/preset-react:
# for npm users
npm update @babel/core @babel/preset-react

# for yarn users
yarn upgrade @babel/core @babel/preset-react
Currently, the old transform ("runtime": "classic") is the default option. To enable the new transform, you can pass {"runtime": "automatic"} as an option to @babel/plugin-transform-react-jsx or @babel/preset-react:
// If you are using @babel/preset-react
{
  "presets": [
    ["@babel/preset-react", {
      "runtime": "automatic"
    }]
  ]
}
// If you're using @babel/plugin-transform-react-jsx
{
  "plugins": [
    ["@babel/plugin-transform-react-jsx", {
      "runtime": "automatic"
    }]
  ]
}
Starting from Babel 8, "automatic" will be the default runtime for both plugins. For more information, check out the Babel documentation for @babel/plugin-transform-react-jsx and @babel/preset-react.
Note: If you use JSX with a library other than React, you can use the importSource option to import from that library instead - as long as it provides the necessary entry points. Alternatively, you can keep using the classic transform, which will continue to be supported.
If you are using eslint-plugin-react, the react/jsx-uses-react and react/react-in-jsx-scope rules are no longer necessary and can be turned off or removed.
{
  // ...
  "rules": {
    // ...
    "react/jsx-uses-react": "off",
    "react/react-in-jsx-scope": "off"
  }
}
TypeScript supports the new JSX transform in v4.1 beta.
Flow supports the new JSX transform in v0.126.0 and up.
Because the new JSX transform will automatically import the necessary react/jsx-runtime functions, React will no longer need to be in scope when you use JSX. This might lead to unused React imports in your code. It doesn’t hurt to keep them, but if you’d like to remove them, we recommend running a “codemod” script to remove them automatically:
cd your_project
npx react-codemod update-react-imports
Note: If you're getting errors when running the codemod, try specifying a different JavaScript dialect when npx react-codemod update-react-imports asks you to choose one. In particular, at this moment the "JavaScript with Flow" setting supports newer syntax than the "JavaScript" setting even if you don't use Flow. File an issue if you run into problems. Keep in mind that the codemod output will not always match your project's coding style, so you might want to run Prettier after the codemod finishes for consistent formatting.
Running this codemod will:
Remove all unused React imports as a result of upgrading to the new JSX transform.
Change all default React imports (i.e. import React from 'react') to destructured named imports (for example, import { useState } from 'react'), which is the preferred style going forward.
For example,
import React from 'react';

function App() {
  return <h1>Hello World</h1>;
}
will be replaced with
function App() {
  return <h1>Hello World</h1>;
}
If you use some other import from React – for example, a Hook – then the codemod will convert it to a named import.
For example,
import React from 'react';

function App() {
  const [text, setText] = React.useState('Hello World');
  return <h1>{text}</h1>;
}
will be replaced with
import { useState } from 'react';

function App() {
  const [text, setText] = useState('Hello World');
  return <h1>{text}</h1>;
}
In addition to cleaning up unused imports, this will also help you prepare for a future major version of React (not React 17) which will support ES Modules and not have a default export.
Laravel 8 is now released, with many new features: Laravel Jetstream, a models directory, class-based model factories, migration squashing, rate-limiting improvements, time testing helpers, dynamic Blade components, and much more.
Before we jump into the new features, we’d like to point out that starting with version 6, Laravel now follows semver and will release a new major version every six months. You can see how the release process works here.
Laravel Jetstream improves upon the existing Laravel UI scaffolding found in previous versions. It provides a starting point for new projects, including login, registration, email verification, two-factor authentication, session management, API support via Laravel Sanctum, and team management.
Laravel 8’s application skeleton includes an app/Models directory. All generator commands assume models exist in app/Models; however, if this directory doesn’t exist, the framework will assume the application keeps its models within the app/ folder.
Eloquent model factories are now class-based starting in Laravel 8, with improved support for relationships between factories (i.e., a user has many posts). I think you’ll agree how awesome the new syntax is for generating records via the new and improved model factories:
use App\Models\User;

User::factory()->count(50)->create();

// using a model state "suspended" defined within the factory class
User::factory()->count(5)->suspended()->create();
If your application contains many migration files, you can now squash them into a single SQL file. This file will be executed first when running migrations, followed by any remaining migration files that are not part of the squashed schema file. Squashing existing migrations can decrease migration file bloat and possibly improve performance while running tests.
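Squashing is driven by the schema:dump artisan command; the commands below follow the Laravel 8 docs, but double-check them against your installed version:

# Dump the current database schema into a SQL file...
php artisan schema:dump

# Dump the schema and prune all existing migration files...
php artisan schema:dump --prune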
Laravel 8 brings improvements to existing rate limiting functionality while supporting backward compatibility with the existing throttle middleware and offering far more flexibility. Laravel 8 has the concept of Rate Limiters that you can define via a facade:
use Illuminate\Cache\RateLimiting\Limit;
use Illuminate\Support\Facades\RateLimiter;

RateLimiter::for('global', function (Request $request) {
    return Limit::perMinute(1000);
});
As you can see, the closure passed to the for() method receives the HTTP request instance, giving you full control over limiting requests dynamically.
Laravel users have enjoyed full control over time modification via the excellent Carbon PHP library. Laravel 8 takes this one step further by providing convenient test helpers for manipulating time within tests:
// Travel into the future...
$this->travel(5)->milliseconds();
$this->travel(5)->seconds();
$this->travel(5)->minutes();
$this->travel(5)->hours();
$this->travel(5)->days();
$this->travel(5)->weeks();
$this->travel(5)->years();

// Travel into the past...
$this->travel(-5)->hours();

// Travel to an exact time...
$this->travelTo(now()->subHours(6));

// Return back to the present time...
$this->travelBack();
When using these methods, the time will reset between each test.
Sometimes you need to render a Blade component dynamically at runtime. Laravel 8 provides the <x-dynamic-component/> component for exactly this:
<x-dynamic-component :component="$componentName" class="mt-4" />
Laravel’s job batching feature allows you to easily execute a batch of jobs and then perform some action when the batch of jobs has completed executing.
The new batch method of the Bus facade may be used to dispatch a batch of jobs. Of course, batching is primarily useful when combined with completion callbacks. So, you may use the then, catch, and finally methods to define completion callbacks for the batch. Each of these callbacks will receive an Illuminate\Bus\Batch instance when they are invoked:
use App\Jobs\ProcessPodcast;
use App\Podcast;
use Illuminate\Bus\Batch;
use Illuminate\Support\Facades\Bus;
use Throwable;

$batch = Bus::batch([
    new ProcessPodcast(Podcast::find(1)),
    new ProcessPodcast(Podcast::find(2)),
    new ProcessPodcast(Podcast::find(3)),
    new ProcessPodcast(Podcast::find(4)),
    new ProcessPodcast(Podcast::find(5)),
])->then(function (Batch $batch) {
    // All jobs completed successfully...
})->catch(function (Batch $batch, Throwable $e) {
    // First batch job failure detected...
})->finally(function (Batch $batch) {
    // The batch has finished executing...
})->dispatch();

return $batch->id;
In previous releases of Laravel, the php artisan down maintenance mode feature could be bypassed using an “allow list” of IP addresses that were permitted to access the application. This feature has been removed in favor of a simpler “secret” / token solution.
While in maintenance mode, you may use the secret option to specify a maintenance mode bypass token:
php artisan down --secret="1630542a-246b-4b66-afa1-dd72a4c43515"
After placing the application in maintenance mode, you may navigate to the application URL matching this token and Laravel will issue a maintenance mode bypass cookie to your browser:
https://example.com/1630542a-246b-4b66-afa1-dd72a4c43515
When accessing this hidden route, you will then be redirected to the / route of the application. Once the cookie has been issued to your browser, you will be able to browse the application normally as if it was not in maintenance mode.
If you utilize the php artisan down command during deployment, your users may still occasionally encounter errors if they access the application while your Composer dependencies or other infrastructure components are updating. This occurs because a significant part of the Laravel framework must boot in order to determine your application is in maintenance mode and render the maintenance mode view using the templating engine.
For this reason, Laravel now allows you to pre-render a maintenance mode view that will be returned at the very beginning of the request cycle. This view is rendered before any of your application’s dependencies have loaded. You may pre-render a template of your choice using the down command’s render option:
php artisan down --render="errors::503"
TypeScript 4.0 is a major milestone in the TypeScript programming language and has currently leapfrogged 3.9 to become the latest stable version. In this post, we’ll look at the new features TypeScript 4.0 offers.
To get started using 4.0, you can install it through NuGet or via NPM:
npm i typescript
You can test the code using the TypeScript playground or a text editor that supports TypeScript. I recommend Visual Studio Code; you can find setup instructions here.
In a nutshell, we can say TypeScript is strongly typed JavaScript. This means that it requires developers to accurately specify the format of their data types; consequently, it allows the compiler to catch type errors at compile time and therefore gives a better developer experience.
This process of accurately specifying the format of data types is known as type declaration or type definition; it is also called typings or, simply, types.
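As a minimal sketch of what a type declaration buys you (the greet function is our own example), consider a function that only accepts a string:

// "name: string" is the type declaration; the compiler enforces it.
function greet(name: string): string {
  return `Hello, ${name}`;
}

greet('Ada'); // OK
// greet(42); // compile-time error: Argument of type 'number' is not
//            // assignable to parameter of type 'string'.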
With this feature, TypeScript gives types to higher-order functions such as curry, concat, and apply. These are functions that take a variable number of parameters.
Consider a small contrived example of the concat function below:
function simpleConcat(arr1, arr2) {
  return [...arr1, ...arr2];
}

console.log(simpleConcat([1, 2, 3], [5, 6])); // [1, 2, 3, 5, 6]
There is currently no easy way to type this in TypeScript. The only typing strategy available currently is to write overloads.
Function or method overloading refers to a feature in TypeScript that allows us to create multiple functions having the same name but a different number of parameters or types.
Consider this:
function concat1<T>(arr1: [T], arr2: []): [T] {
  return [...arr1, ...arr2];
}

function concat2<T1, T2>(arr1: [T1, T2], arr2: []): [T1, T2] {
  return [...arr1, ...arr2];
}

function concat6<T1, T2, T3, T4, T5, T6>(arr1: [T1, T2, T3, T4, T5, T6], arr2: []): [T1, T2, T3, T4, T5, T6] {
  return [...arr1, ...arr2];
}

function concat10<T1, T2, T3, T4, T5, T6, A1, A2, A3, A4>(
  arr1: [T1, T2, T3, T4, T5, T6],
  arr2: [A1, A2, A3, A4]
): [T1, T2, T3, T4, T5, T6, A1, A2, A3, A4] {
  return [...arr1, ...arr2];
}

console.log("concated 1", concat1([1], []));
console.log("concated 2", concat2([1, 2], []));
console.log("concated 6", concat6([1, 2, 3, 4, 5, 6], []));
console.log("concated 10", concat10([1, 2, 3, 4, 5, 6], [10, 11, 12, 13]));
From the example above, we can see that the number of type parameters and overloads grows with the number of items in the arrays, which is suboptimal. In concat6 we had to write six type parameters even though the second array is empty, and this quickly grew to ten in concat10 when the second array had just four items.
Also, we can only get correct types for as many overloads as we write.
TypeScript 4.0 comes with significant inference improvements. It allows spread elements in tuple types to be generic and to occur anywhere in the tuple.
In older versions, a rest element had to be the last element in a tuple type, and TypeScript would throw an error if this was not the case:
// Tuple spread items are generic
function concatNumbers<T extends Number[]>(arr: readonly [Number, ...T]) {
  // return something
}

// Spreads occurring anywhere in the tuple are valid in 4.0:
type Name = [string, string];
type ID = [number, number];
type DevTuples = [...Name, ...ID];
Given these two additions, we can write a better function signature for our concat function:
type Arr = readonly any[];

function typedConcat<T extends Arr, U extends Arr>(arr1: T, arr2: U): [...T, ...U] {
  return [...arr1, ...arr2];
}

console.log("concated", typedConcat([1, 2, 3, 4, 5], [66, 77, 88, 99]));
This is a pithy addition to TypeScript aimed at improving code readability.
Consider the code below:
// Example 1: older versions
type Period = [Date, Date];

// Example 2: TypeScript 4.0, with labels
type LabelledPeriod = [startDate: Date, endDate: Date];

function getAge(): [birthDay: Date, today: Date] {
  // ...
}
Previously, TypeScript developers used comments to describe tuples, because the types themselves (date, number, string) don’t adequately describe what the elements represent.
From our small contrived example above, “example 2” is way more readable because of the labels added to the tuples.
Note that when labelling tuples, all items in the tuple must be labelled.
Consider the code below:
type Period = [startDate: Date, Date];          // incorrect
type Period = [startDate: Date, endDate: Date]; // correct
In TypeScript 4.0, we can now use control flow analysis to determine the types of properties in classes when noImplicitAny is enabled. Let’s elaborate on this with some code samples.
Consider the code below:
// Compile with --noImplicitAny
class CalArea {
  Square; // string | number

  constructor(area: boolean, length: number, breadth: number) {
    if (!area) {
      this.Square = "No area available";
    } else {
      this.Square = length * breadth;
    }
  }
}
Previously, the code above would not compile with noImplicitAny enabled, because property types were only inferred from direct initializations: a property’s type had to be declared explicitly or inferred from an initializer on the property declaration itself.
However, TypeScript 4.0 can use control flow analysis of the this.Square assignments in the constructor to determine the type of Square.
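Continuing the CalArea example above (a sketch of how the inferred union plays out at the call site), consumers must narrow Square before using it:

const area = new CalArea(true, 10, 20);

// Square is inferred as string | number, so narrow before use:
if (typeof area.Square === "number") {
  console.log(area.Square * 2); // 400
} else {
  console.log(area.Square.toUpperCase());
}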
Currently, in JavaScript, a lot of binary operators can be combined with the assignment operator to form a compound assignment operator. These operators apply the binary operator to both operands and assign the result to the left operand:
// compound operators
foo += bar  // foo = foo + bar
foo -= bar  // foo = foo - bar
foo *= bar  // foo = foo * bar
foo /= bar  // foo = foo / bar
foo %= bar  // foo = foo % bar
The list goes on but with three exceptions:
||  // logical or operator
&&  // logical and operator
??  // nullish coalescing operator
TypeScript 4.0 allows us to combine these three with the assignment operator, forming three new compound operators:
x ||= y  // x || (x = y)
x &&= y  // x && (x = y)
x ??= y  // x ?? (x = y)
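To see where these are useful (a sketch; the Config shape and normalize helper are hypothetical), note that ??= only assigns when the current value is null or undefined, while ||= assigns on any falsy value:

interface Config {
  retries?: number;
  tags?: string[];
}

function normalize(config: Config): Config {
  config.retries ??= 3; // keeps an explicit 0
  config.tags ||= [];   // replaces undefined, null, or any other falsy value
  return config;
}

console.log(normalize({ retries: 0 }));
// { retries: 0, tags: [] } (with ||= on retries, the 0 would become 3)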
Previously, when we used a try … catch statement in TypeScript, the catch clause was always typed as any; consequently, our error-handling code lacked the type safety that should prevent invalid operations. I will elaborate with some code samples below:
try {
  // ...
} catch (error) {
  error.message
  error.toUpperCase()
  error.toFixed()
  // ...
}
From the code above we can see that we are allowed to do anything we want — which is really what we don’t want.
TypeScript 4.0 aims to resolve this by allowing us to set the type of the catch variable as unknown. This is safer because it’s meant to remind us to do a manual type checking in our code:
try {
  // ...
} catch (error: unknown) {
  if (typeof error === "string") {
    error.toUpperCase()
  }
  if (typeof error === "number") {
    error.toFixed()
  }
  // ...
}
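Another common narrowing pattern here (our own sketch, not from the original examples) is instanceof, since most thrown values are Error objects:

try {
  // ...
} catch (error: unknown) {
  if (error instanceof Error) {
    // narrowed to Error, so .message is safe
    console.error(error.message);
  } else {
    console.error("Unknown error", error);
  }
}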
TypeScript already supports the jsxFactory compiler option. This feature adds a new compiler option known as jsxFragmentFactory, which enables users to customize the React fragment factory in tsconfig.json:
{ "compilerOptions": { "target": "esnext", "module": "commonjs", "jsx": "react", // React jsx compiler option "jsxFactory": "createElement", // transforms jsx using createElement "jsxFragmentFactory": "Fragment" // transforms jsx using Fragment } }
The above tsconfig.json configuration transforms JSX in a way that is compatible with React thus a JSX snippet such as <article/> would be transformed with createElement instead of React.createElement. Also, it tells TypeScript to use Fragment instead of React.Fragment for JSX transformation.
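Here is a sketch of what a component file could look like under that configuration; since the factories are the bare names createElement and Fragment (rather than React.createElement), the file itself has to import them:

// With the tsconfig above, this JSX compiles to calls to createElement
// and Fragment, so both names must be in scope:
import { createElement, Fragment } from 'react';

function Articles() {
  return (
    <>
      <article>First</article>
      <article>Second</article>
    </>
  );
}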
TypeScript 4.0 also features great performance improvements in --build mode scenarios, and allows us to use the --noEmit flag while still leveraging --incremental compiles. This was not possible in older versions.
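For example, a type-check-only configuration can now keep its incremental cache (a minimal sketch of the relevant tsconfig.json options):

{
  "compilerOptions": {
    "incremental": true,
    "noEmit": true
  }
}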
In addition, there are several editor improvements, such as recognition of @deprecated JSDoc annotations, smarter auto-imports, and a partial editing mode at startup (aimed at speeding up startup time).