
How to debug Jasmine-es6 in visual studio code

This article shows the configuration required to debug Jasmine-ES6 in Visual Studio Code.
Jasmine-ES6 is a set of helpers and overrides that augment Jasmine for use in an ES6+ environment. It is great when you have a project that you do not want to transpile using Babel, and it turned out to be one of the NPM packages used in one of the latest projects I was involved in.
Due to the nature of the plugin, it is not possible to debug Jasmine-ES6 directly in the browser, but it is possible using the debug feature provided by Visual Studio Code. The settings provided below can actually be adapted to emulate any NPM command that you currently use.

Create a debug configuration file in Visual Studio Code.

Visual Studio Code enables us (sometimes with the help of an extension) to debug almost any coding language (JS, C#, PHP, etc.).

To access the debug page, click the “bug” icon in the left-hand menu.

Now that we have accessed the debugging page, we can add our configuration. To do so, click on the dropdown next to the green arrow, as shown in the image below.

Visual Studio Code (VSC) provides a list of “predefined” debugging configurations that will support you in completing the setup. In our case we are going to use the “Launch Program” option.
visual studio code available configuration
Our configuration file will look something like this:
Visual studio Code basic debug file
 
  {
    "version": "0.2.0",
    "configurations": [
      {
        "type": "node",
        "request": "launch",
        "name": "Launch Program",
        "program": "${workspaceRoot}/app.js"
      }
    ]
  }
The configuration can have multiple entries, which can be accessed via the dropdown previously used.
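As a sketch of what that looks like (the second entry's name and program path here are just illustrative), a file with two entries could be structured like this:

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "launch",
      "name": "Launch Program",
      "program": "${workspaceFolder}/app.js"
    },
    {
      "type": "node",
      "request": "launch",
      "name": "Run Jasmine specs",
      "program": "${workspaceFolder}/node_modules/jasmine-es6/bin/jasmine.js"
    }
  ]
}
```

Each entry then appears as a separate option in the debug dropdown.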

Setting the config

The config requires two main pieces of information. The first is the program that we would like to run; this can be replaced with whatever program you currently run from the command line. When writing a command you will probably just use the name of the package (depending on how it is installed), for example “jasmine init”.

Node will automatically know that you are actually looking for a package called jasmine within the node_modules folder. Unfortunately our debug configuration file is not that clever and requires you to specify the complete path.
You can use ${workspaceFolder} to select the workspace root, and then form the rest of the path required to reach the entry JS file of your package. In the case of Jasmine-ES6 the path will look something like:
jasmine-es6 path
 
  "${workspaceFolder}/node_modules/jasmine-es6/bin/jasmine.js"
Running the above is the equivalent of running the jasmine-es6 command in the command line. This will work, but in our case we want to be more specific and run just a single spec file.
In a command line scenario I would run the following line:
Jasmine command line
 
jasmine-es6 "/tests/Tests1spec.js"
To add parameters to our configuration we need to specify the args array:
Args array
 
  "args": [
    "${workspaceFolder}\\tests\\Tests1spec.js"
  ]
If you use backslashes instead of forward slashes, you will have to escape them (as shown above).

Conclusion

The above post is aimed at supporting you and hopefully saving you some time. The debugging features of Visual Studio Code are quite extensive (I have debugged PHP in the past and it worked perfectly). Now that everything is set up, you can start debugging by clicking the green arrow on the debug page, or just by pressing F5 on your keyboard (make sure to add breakpoints where you would like the app to break).

There may be better methods to debug, and most people would have a webpack setup to support them in the transpilation and test run, but I wanted to go against the current and try something different.

As always I am happy to receive any comments that can support future readers.

I complete the post with the full file below:

Node Program debug in Visual Studio Code
 
  {
    // Use IntelliSense to learn about possible Node.js debug attributes.
    // Hover to view descriptions of existing attributes.
    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
    "version": "0.2.0",
    "configurations": [
      {
        "type": "node",
        "request": "launch",
        "name": "Launch Program",
        "program": "${workspaceFolder}/node_modules/jasmine-es6/bin/jasmine.js",
        "args": [
          "${workspaceFolder}/tests/Test1spec.js"
        ]
      }
    ]
  }

 

How to Create a Database migration with Entity Framework

This article is the continuation of a series started with the first article, which explained how to set up SQL Server with Entity Framework using .NET Core.

This post is going to explain how to create a migration script with the Entity Framework tools, and then we are going to use this migration to create all the tables in our local database.

Entity Framework

Entity Framework is an open-source ORM framework for .NET applications supported by Microsoft. It enables developers to work with data using objects of domain specific classes without focusing on the underlying database tables and columns where this data is stored. With the Entity Framework, developers can work at a higher level of abstraction when they deal with data, and can create and maintain data-oriented applications with less code compared with traditional applications.

The sentence above perfectly summarises the power of this ORM framework. It supports developers in one of the most tedious tasks.

In this chapter we require two different NuGet packages.

  1. The database provider
  2. The Entity Framework tools

All the packages below will be installed using the Package Manager Console. To access it in Visual Studio, go to:

Tools > NuGet Package Manager > Package Manager Console

The database provider

In our example we are going to use SQL Server (as we have already defined in the previous posts), but there is a list of available database providers.

To install our package, type the following command:

Database Provider
 
  Install-Package Microsoft.EntityFrameworkCore.SqlServer

Entity Framework Tools

Entity Framework is powerful by itself, but we also have a set of tools to make it even greater.

The tools can be installed by using the following command in Package Manager Console:

EntityFrameworkCore
 
  Install-Package Microsoft.EntityFrameworkCore.Tools

Create a migration script

After all of the above steps, creating a migration script is very straightforward.

Using our previously installed packages we can run the following command in the Package Manager Console

Migration Script
 
  Add-Migration InitialMigrationScript

 

During the above step you may receive an error stating:

Migration Error
 
  The term 'add-migration' is not recognized as the name of a cmdlet

If this error appears, you just have to close and reopen Visual Studio.

The code above will create a migration script called InitialMigrationScript and place it in a folder called Migrations.

The file is actually pretty readable, and it is useful to analyse it so that, if necessary, you can modify it or create one in the future without the help of the tooling.
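To give an idea of its shape, here is a rough sketch of what a generated migration could look like for the Player table from the previous article (the exact output, including the namespace, depends on your models and EF Core version):

```csharp
using Microsoft.EntityFrameworkCore.Migrations;

namespace My_game.Migrations
{
    public partial class InitialMigrationScript : Migration
    {
        // Up() is executed by Update-Database to apply the changes.
        protected override void Up(MigrationBuilder migrationBuilder)
        {
            migrationBuilder.CreateTable(
                name: "Player",
                columns: table => new
                {
                    PlayerId = table.Column<int>(nullable: false),
                    Name = table.Column<string>(nullable: true),
                    health = table.Column<decimal>(nullable: false)
                },
                constraints: table =>
                {
                    table.PrimaryKey("PK_Player", x => x.PlayerId);
                });
        }

        // Down() reverses the migration when rolling back.
        protected override void Down(MigrationBuilder migrationBuilder)
        {
            migrationBuilder.DropTable(name: "Player");
        }
    }
}
```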

Run a migration

This is our final step. After which, our table will be fully set and ready to be used within our app.

To create the tables, we have to run the migration file previously created, using one of the packages installed in the initial steps.

To run the migration, run the following command in the Package Manager Console:

Create Tables
 
  Update-Database

Conclusion

We are now at the end of this little tutorial. The above steps should have allowed us to create a full migration script that can be used to keep a consistent database schema across environments and to save developers plenty of time.

I hope this can be helpful to any of you, as I personally spent some time putting this together and finding all the information I needed.

As mentioned above and in my previous post, I am sharing my findings as I use these technologies in side projects, and I am more than happy to receive feedback that can help me improve them.

 

How to setup SQL server with Entity Framework using .NET CORE

This article is the first of a series aimed at explaining how to use familiar technologies with .NET Core.

The aim of this post is to create a database from scratch using Entity Framework. We will configure SQL Server to run with our Core app and finally touch upon table relationships in Entity Framework.

Disclaimer: This is the first time I have used Entity Framework and .NET Core, so I am just trying to share my “findings” and this is by no means the “best” solution available. I am more than open to suggestions if I can make the code below faster and cleaner.

Getting Started

To be able to follow along with this article, you will need to have a project set up and ready. If you do not have one, I suggest you create a new project (Web API preferred) using the newly released cross-platform templates. To enable this you will need to install the newest Core package for Visual Studio.

In our case we are going to use a newly created Web API built on Core 2.0.

Tables

One of the hardest tasks to complete before you can create a full set of tables in a relational database (like SQL Server and MySQL) is to create a full map of the tables. What your app needs may seem clear in your head, but I greatly suggest you create a complete map of all the tables, columns and their relationships on paper, to make sure that you have thought of everything.

You will be able to change things later on, but database architecture is very important for the performance of your application.

In our case we are going to create a player table and an inventory table.

The tables will have the following columns

Player

  • (int) PlayerId
  • (string) Name
  • (decimal) health

Inventory

  • (int) InventoryId
  • (int) PlayerId
  • (string) Name

The first step is to create a model class for each of the above tables. It is good practice to keep all models together within a folder called Models, placed in the root of our application.

Now that the folder is in place, we can create two files and call them Player.cs and Inventory.cs.

Player.cs
 
namespace My_game
{
    public class Player
    {
        public int PlayerId { get; set; }
        public string Name { get; set; }
        public decimal health { get; set; }
    }
}

The above model is quite simple. If you are familiar with C# at all, you have surely created a file that looked like this in the past. The power is that Entity Framework will use the above class to create the table and support us in mapping our future database queries.

Now we need to create another file that will include the Inventory class. This is going to be slightly different from the one above, as this class is expected to have a “relationship” with the Player class, because each player will be able to have many pieces of inventory.

Inventory.cs
 
namespace My_game
{
    public class Inventory
    {
        public int InventoryId { get; set; }
        public string Name { get; set; }
        public int PlayerId { get; set; }
        public virtual Player Player { get; set; }
    }
}

As shown by the above code, to add a relationship you just need to add a “virtual” property typed with the recently created Player model.

As you may have noticed, we have not specified any unique identifier when creating the models, nor any specification of where the relationship between the tables lies.

The magic is in the names. Entity Framework, if not told otherwise, expects the unique identifier to be either called Id, or the class name followed by Id (e.g. InventoryId, PlayerId).

Relationships are handled in a very similar way: Entity Framework assumes the tables are connected by their unique identifiers, and as with the above case, we are free to change the default (this will not be covered in this article).

Context

Now that we have a couple of tables in action, it is time to fit them together. To do so we need to use the Microsoft.EntityFrameworkCore.DbContext class (often referred to as the context). This class is responsible for creating a complete picture of the database.

For this example we are going to create a context file called dbContext.cs. This file will include the models created above and will look like the following snippet:

dbContext.cs
 
using Microsoft.EntityFrameworkCore;

namespace My_game.Models
{
    public class dbContext : DbContext
    {
        public dbContext(DbContextOptions<dbContext> options)
            : base(options)
        {
        }
        public DbSet<Player> Player { get; set; }
        public DbSet<Inventory> Inventory { get; set; }
    }
}

The above class is going to be used in the next few chapters with the Entity Framework tools to create a migration script that will eventually create our database tables.

Configure SQL server

Now that all our models and the context have been fully developed, we are ready to configure SQL Server. Explaining how to set up a SQL server and create a database is out of scope for this article, but plenty of resources can easily be found on this topic.

Assuming that you have a server and a database set up, we will need to add a connection string to our appsettings.json file. This connection string will provide our application with the correct credentials to connect to our database.

To target a local server called mssqllocaldb and a database called my_app_db, a connection string would look something like this:

appsettings.json
 
"ConnectionStrings": {
    "DefaultConnection": "Server=(localdb)\\mssqllocaldb;Database=my_app_db;Trusted_Connection=True;MultipleActiveResultSets=true"
  }

The connection string varies depending on the settings, authentication and location of the server, so the one above has just been shared to give you an idea. You can have multiple connection strings (for development and live environments). In our case the connection string is going to be called DefaultConnection.
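As a sketch of a multi-environment setup (the LiveConnection name and server details below are purely illustrative placeholders), the ConnectionStrings section could hold several entries side by side:

```json
"ConnectionStrings": {
    "DefaultConnection": "Server=(localdb)\\mssqllocaldb;Database=my_app_db;Trusted_Connection=True;MultipleActiveResultSets=true",
    "LiveConnection": "Server=my-live-server;Database=my_app_db;User Id=app_user;Password=<secret>;MultipleActiveResultSets=true"
}
```

The application then picks the entry it needs by name via GetConnectionString.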

Now that we have added the above entry in our appsettings.json file, we are ready to link the database to our app.

This time we are going to insert some code in the Startup.cs file. This file includes all the services and configurations that are going to be made available within the app.

Our database connection is going to be a service, and as such, our code is going to be inserted within the ConfigureServices method.

SQL server service configuration
 
            services.AddDbContext<dbContext>(options =>
                options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));

Adding the service is quite straightforward: you just need to add the context name that we have previously defined, dbContext, and specify the connection string that we want to use to connect to our DB, DefaultConnection. This is all we need to be able to access the DB within our application.

The example above will connect to SQL Server, but there are already options within the framework to connect to the most used database servers.
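For context, here is a minimal sketch of how that call sits inside Startup.cs (the surrounding members are roughly what the Web API template generates; your file may differ slightly):

```csharp
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using My_game.Models;

public class Startup
{
    public Startup(IConfiguration configuration)
    {
        Configuration = configuration;
    }

    public IConfiguration Configuration { get; }

    // Registers the EF Core context as a service, reading the
    // DefaultConnection string from appsettings.json.
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddDbContext<dbContext>(options =>
            options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));
    }
}
```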

Query the database

Now that everything is linked, querying the database is going to be very simple. For example, the file below shows the code required to do a select statement (get) and an insert statement (add).

Basic Database operations
 
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using My_game.Models;

namespace My_game.Controllers
{
    [Route("api/[controller]")]
    public class dbController : Controller
    {
        private readonly dbContext _context;

        public dbController(dbContext context)
        {
            _context = context;
        }

        [HttpGet]
        public IEnumerable<Player> GetPlayer()
        {
            return _context.Player.ToList();
        }

        [HttpPost]
        public Player AddPlayer(string name)
        {
            var player = new Player {
                Name = name,
                health = 100
            };
            _context.Player.Add(player);
            _context.SaveChanges();
            return player;
        }
    }
}

Conclusion

In the next few articles we are going to explain how to create migration scripts and how to use them to create your tables; we will also explain how to set up your database to seed our tables with static data on start-up.

This article has covered the basic setup required to get you up and running with SQL Server on a .NET Core app. I really hope you find the information shared above useful, and I am happy to get any comments to support me in tailoring it for future readers.

 

What is Version Control in simple terms

In the past few months, I have spent my free time trying to support new developers to find their first commercial experience.

Usually I find that most of these candidates are not far off from being technically ready; what they really lack is knowledge of the “ecosystem” that experienced developers use passively, without even considering it important.

Today I am going to focus on version control. This post is not going to explain how to use the tool (there are lots of fantastic videos and courses out there that can do that for you), but is going to focus on explaining individual technical terms that are sometimes hard to understand without a proper explanation or example.

What is Version Control

Formally started in the 70s under the name of Source Code Control System, version control is one of the most used tools across teams of developers. The actual definition of version control is:

Version control systems are a category of software tools that help a software team manage changes to source code over time.*

The main task achieved by version control is to orchestrate changes to files within a project. In simple words, version control software allows multiple developers to work on the same project at the same time, without problems or the possibility of overriding everyone else's work.

The main reason why this tool is not very common among new developers is probably that it does not seem useful for a single developer, and people underestimate its hidden powers.

Master, Origin and the simple commands

To explain it further we are going to use some real-life examples to explain some of the technical terms used by developers. The example we are going to use is that of cloud storage like Dropbox or Google Drive. Many of us use them on a day-to-day basis, and comparing version control to them should help in understanding some of the technical aspects and words used.

Origin

When working on a project, all developers have a copy of the project locally on their drive, but ORIGIN is a shared copy that holds all the information and is the true copy of the project. If you take the example of Google Drive, this would be the folder stored in the cloud, which all users of that Google Drive account can access from their browsers.

Fetch

This is one of the simplest commands in version control. It allows a developer to update the history of the files. This command does not actually change any file locally; it is just used to show whether something has changed in origin, and to inform the developer whether the versions of the files he is working on are up to date or need to be updated.

Using again the example of Google Drive, this happens every time a computer is switched on and synchronises with the cloud folder, to see if anyone has changed any file.

Pull

The fetch command is sometimes omitted by developers, who tend to update the files locally directly using the “pull” command. Issuing a “pull” command will, behind the scenes, “fetch” all the information from “origin” and then download all the required changes to the developer's machine, to make sure that the project files are up to date. A successful “pull” means that your local project files are the same as ORIGIN's.

Bringing back our simple example above, the synchronisation of files is usually followed by those files being downloaded and updated locally, and this is what a “pull” does.

Push

This is the last of the simple and most used commands. All the above commands are used to grab information from origin, but “push” is used to update ORIGIN with the local changes that we have made to files within the project.

A successful push means that your copy of the project and origin are the same. It is important to note that after someone “pushes” code to origin, all other developers need to “pull” to have an up-to-date version of the code.
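To make the cycle concrete, here is a small sketch using git (the most common version control tool), with a local bare repository standing in for the shared ORIGIN; the file and branch names are just illustrative:

```shell
# A sketch of the fetch / pull / push cycle using git, with a local
# bare repository standing in for the shared ORIGIN.
git init -q --bare origin-repo           # the shared "true copy" of the project
git clone -q origin-repo dev-copy        # a developer's local copy
cd dev-copy
branch=$(git symbolic-ref --short HEAD)  # default branch name (master or main)
echo "v1" > notes.txt
git add notes.txt
git -c user.name=Dev -c user.email=dev@example.com commit -q -m "First version"
git push -q origin "$branch"             # update ORIGIN with our local changes
git fetch -q origin                      # refresh history only; no files change
git pull -q origin "$branch"             # fetch + merge, so local matches ORIGIN
```

After the push, ORIGIN holds the commit; the fetch and pull then confirm the local copy is in sync.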

Master, Branches, Merges and Merge Conflict

Now that we have covered the simple commands, it is time to move on to something a bit more complicated. In this part of the article, we are going to build a car to support our thinking and hopefully help explain these harder commands.

Branches

In small projects all developers will work on the default branch, called master. This will be the “only” representation of the project. But it is very common for developers to use multiple branches within a project. Branches are usually used to develop features or updates that a developer does not yet want to be part of master.

Merge

Depending on the outcome of our branch work, we may need to combine the new feature with the main work in “master” or in another “branch”. To do so, we need to use a command called “merge”. We usually merge a branch “into” another. More details will be given below with our example.

Merge Conflict

Unfortunately, not everything always ends well. Sometimes when merging branches we may end up in conflict. A conflict occurs when more than one developer has worked on the same file (more precisely, the same line) and the version control software does not know how to deal with the changes. This situation is quite tricky and we will try to cover it in more detail with the example below.

The production Line example

We are the proud owners of a single car production line. This establishment is called “master”, and in it we build the first version of our car.

We are now going to experiment on the car to create an update to our current model. We take the finished car (pull from master) and we create a specific “branch” in the factory to add a feature to the car: a dashcam. After this feature has been added in the branch, and the engineers are happy with the change, we “merge” our work back into the main production line (master). After doing so, every car will now come with a dashcam.

We are now going to carry out two experiments at the same time. As previously done, we are going to get the current model from the production line, and we are opening two new branches: Branch1 and Branch2.

In branch one we are going to add a new feature, automatic mirrors; in the other branch, we are going to work on a separate feature called “dark mirrors”.

After some time spent in branch one, we have decided to merge the work back into the main production line. As with our previous example, there is no issue, because no one has worked on any feature on this car that affected the mirrors (or at least not in master).

Development on the second branch has now completed and we want to merge the second branch into master. Unfortunately, when trying to merge this new feature into the main production line, we have a “merge conflict”. There are two changes on the same production line that try to modify the mirrors: one wants to add automatic mirrors and the other wants to make them black. To solve this issue the engineer needs to manually decide which features to keep between the different branches. In our case the engineer has decided to keep both, and will resolve the conflict by adding both features to the main production line.
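The production-line story translates almost line by line into git commands. In this sketch (file names and branch names are illustrative) the two branches change the same line of the same file, so the second merge produces a conflict, which we resolve by keeping both features:

```shell
# The production-line example as git commands: two branches edit the
# same line of mirror.txt, so the second merge conflicts.
mkdir car-factory && cd car-factory
git init -q
echo "standard mirror" > mirror.txt
git add mirror.txt
git -c user.name=Dev -c user.email=dev@example.com commit -q -m "First car"
main=$(git symbolic-ref --short HEAD)    # the production line (master/main)

git checkout -q -b branch1               # feature: automatic mirrors
echo "automatic mirror" > mirror.txt
git -c user.name=Dev -c user.email=dev@example.com commit -q -am "Automatic mirrors"

git checkout -q -b branch2 "$main"       # feature: dark mirrors, from the same model
echo "dark mirror" > mirror.txt
git -c user.name=Dev -c user.email=dev@example.com commit -q -am "Dark mirrors"

git checkout -q "$main"
git merge -q branch1                     # merges cleanly: nothing else touched mirrors
git merge branch2 || true                # CONFLICT: both branches changed the same line
echo "automatic dark mirror" > mirror.txt   # the engineer keeps both features
git add mirror.txt
git -c user.name=Dev -c user.email=dev@example.com commit -q -m "Keep both mirror features"
```

The `|| true` is only there because the conflicting merge deliberately exits with an error; resolving the file and committing completes the merge.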

 

Conclusion

In the above article I have just touched on some of the most used commands and terms that come up when using version control. Using version control software is like driving a car: initially it seems impossible and everything is extremely hard, but then it becomes muscle memory and it is straightforward.

Unfortunately for me, this is the case now, and it is extremely hard to put even simple tasks into words. I thought this article would be easy to write, but it turned out to be one of the hardest.

If any of you know how to explain this in better terms, or have comments or questions, please do not hesitate to comment below and I will happily edit the article to make it better.

References

*  Version Control definition from website: https://www.atlassian.com/git/tutorials/what-is-version-control

 

Write cleaner Javascript code with Eslint

Javascript has a bad reputation, and this is mainly due to the fact that it is too flexible to use. Many users abuse this great feature of the language, writing code that is very inconsistent and hard to follow. For example, a simple inconsistency like single or double quotes around strings can make a project seem quite unclear and messy.

In recent years, after being exposed to different languages that follow a more rigid approach to coding standards, I have started to focus on writing cleaner Javascript, and I would like to share some of these experiences.

JavaScript is especially prone to developer error due to its lack of a compilation process. Linting tools allow developers to discover problems with their JavaScript code without executing it. In this article I will go into detail on how you can use and configure a lint utility to support you.

What is Eslint

Eslint is a pluggable linting utility created by Nicholas C. Zakas in June 2013 that can be used to analyse our Javascript code and highlight errors on the fly, supporting quick and clean development.

There are many linters around, but below are some of the reasons that supported my decision to choose Eslint:

  • It is open source
  • Great documentation
  • Huge number of configurations
  • Good examples
  • Works with all major IDEs and code editors
  • Can be used from the command line
  • Works on build servers

Getting started with Eslint

Eslint requires Node.js to be installed on your machine, so if you do not have it, please download the latest stable release from the main website.

Install

After Node.js is up and running, we can move to the next step by opening a Command Prompt and installing the Eslint package from NPM (visit the NPM website for more information regarding its usage).

Install eslint package globally
 
  npm i -g eslint

Configure

Following the installation above, we now have to define a configuration file called .eslintrc.json. Eslint comes with a handy feature to support you in creating this file. To trigger it, go to the root location of your site and type the following command:

Create eslint config file
 
  eslint --init

This command will present three options:

  1. Answer a couple of questions to define the configuration required
  2. Use a popular styleguide (this will require an existing NPM package)
  3. Inspect your Javascript files (not suggested if your coding style is inconsistent)

The above options will be displayed in the command prompt, as shown by the image below:

eslint --init options

For the purpose of this blog post, I am going to provide my own configuration file, which needs to be extracted to the root of your site. The file can be downloaded by clicking the following link:

Eslint Configuration File

Run the linter

After successfully completing the above steps, you will be able to use Eslint through its extensive CLI (command line interface) commands or within most of the major IDEs (more details to follow). The most basic command just includes the file or path to test; for example, the following two examples will respectively test file.js and all files within the “site” folder.

eslint run
 
  eslint "c:/site/file.js"
  eslint "c:/site"

Configuration File explained

Now that we have Eslint fully configured, it is time to dive into the configuration file to see how it is structured and what configurations are set in our setup. As mentioned above, there are hundreds of configurations, all well documented on the Eslint website.

Root & Environments

Section one
 
"root": true, //this sets the folder of this file as the root directory
    "env": {//load common environment/plugin configuration
        "browser": true,
        "commonjs": true,
        "es6": true,
        "node": true,
        "jquery": true
    }

The first section of our configuration file includes the root setting. This is an optional setting that defines the root of your project. Projects can have more than one Eslint configuration file; when running, Eslint searches in the current folder and all its ancestors until it finds the root value.

The next setting used is env (environment). Using this setting will load specific environment settings. For example, in the case above we are allowing the use of ES6 notation, so no error would be triggered by the arrow notation =>.

Globals

Globals eslint
 
"globals": {
        "exampleGlobalVariableName": true
    }

The globals configuration accepts an object that determines global variables that we can use within our JS files without declaring them. Using an undeclared global that is not listed here will trigger a no-undef error.

For example, adding jquery to the env setting above indirectly set globals for the $ and jQuery variables. This setting is really useful in helping you spot unexpected dependencies that you may have, which would otherwise make unit testing hard.

Rules

Most of the rules have three simple settings.

  • off – This setting is self-explanatory: it switches the rule off.
  • warn – This setting triggers a warning. In many IDEs warnings are displayed with a yellow underline.
  • error – This setting returns an error. A file can have some warnings, but you should always aim for “error free” files.

Some specific rules have more arguments that can be used, but I would be wasting your time trying to explain them all, as the Eslint documentation is really complete.
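As an illustration of the three settings in action (these particular rules are common examples, not necessarily part of my shared configuration file), a rules section could look like this:

```json
"rules": {
    "quotes": ["error", "single"],
    "semi": ["warn", "always"],
    "no-console": "off"
}
```

Here double quotes produce an error, a missing semicolon only a warning, and console calls are allowed entirely.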

Using Code Editors

The Eslint CLI is fantastic, but I do not expect you to run it every single time you modify a file; even though the linter gives you the line number of each problem, it would still be hard to use on a daily basis.

Luckily for us, Eslint is very easy to set up in most of the major code editors. In this section of the article, I am going to show how to install the linter in Visual Studio Code, which is my editor of choice.

Visual Studio Code

To use Eslint in the editor, we first need to install the official extension called ESLint. To access the extensions page, click the Extensions button in the left menu or press Ctrl+Shift+X (on Windows).

After installing the above extension, you will be required to restart the editor for it to take effect.

On restart, assuming your Eslint configuration file is in the correct folder, you will immediately see errors highlighted in real time.

The screenshots below show some of the features available when enabling the extension.

 

Conclusion

I personally doubted the need for a tool like this, but now, after using it, I cannot live without it. The configuration that I have shared is a personal choice, and I am more than happy for you to use it if you wish, but at the same time you should also dive into the available rules and see if there is anything that fits your guidelines.

Starting to use a linter is just the first step to writing cleaner code, no matter if you are a single developer or work in a big team. Introducing this tool will increase the quality and readability of your code. We have all been guilty of sometimes being too lazy to keep our projects consistent and structured, but now the tools are available and we have no excuse not to start writing cleaner Javascript.

 
