
How to fix git error “fatal: bad object HEAD”

During development, I stumbled upon an error in Git that prevented me from completing any operation that required contacting the remote origin (fetch, push, etc.).

No matter which branch I was on, I would always receive the following error when trying any of the above operations:

fatal: bad object HEAD

The repository in question was quite big (over 4 GB) and I wanted to find a solution that did not require me to pull a full copy of the repository down again.

The problem

The source of the issue is a corrupted file within the .git folder. There is no special reason for this to happen (or at least I could not find any reasonable explanation for it), and fortunately it is not extremely common.
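If you want to confirm that your repository is affected by this kind of corruption, Git ships with a built-in integrity check. This is an optional diagnostic step; the exact output will vary from repository to repository.

# Verify the connectivity and validity of the objects in the repository.
# On a corrupted repository this typically reports errors such as
# "bad object" or missing/dangling objects.
git fsck --full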

The code

I am going to share the solution that I used in my specific case. It solved the issue in just a few seconds.

The following commands need to be run from the root of the affected repository.

cp .git/config .git/config.backup
git remote remove origin
mv .git/config.backup .git/config
git fetch

The explanation

The code above is fairly self-explanatory, but for the curious I will explain it below, line by line.

cp .git/config .git/config.backup

This line uses cp, the command-line utility for copying files. It simply makes a backup copy of the config file within the .git folder.

git remote remove origin

This line of code uses the remote feature of Git, whose main duty is managing the set of tracked repositories. As explained in the official Git documentation, the remove command removes all remote-tracking branches and configuration settings for the given remote.

This command is the actual solution to our problem: by removing the remote's configuration and the existing remote-tracking files, we also remove the corrupted ones.

mv .git/config.backup .git/config

We are now using another command-line utility, mv, which moves files. The command restores our previously backed-up config file. This step is needed to re-set all the remote branches.

git fetch

If you have ever used Git, you will have come across the fetch command. Running it will recreate all the remote-tracking files that we previously removed. Git is going to use the information within the config file to know which branches and tags should be fetched from the remote.
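For reference, the part of the restored config file that git fetch relies on is the remote section, which looks roughly like the snippet below (the URL is just a placeholder for your own remote):

[remote "origin"]
    url = https://example.com/your-repository.git
    fetch = +refs/heads/*:refs/remotes/origin/*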

Conclusion

The above code helped me, and I hope it will help you solve your issue. Please feel free to post any comments and/or suggestions to improve the fix.

Frontend Job interview research

Need for change

Last year, during our Front End chapter meeting, we agreed to invest some time in improving our front-end interview format, as it was not really fit for purpose and did not really support us in making the right decisions about candidates.

The interview format in use involved a coding test, focused on making an Ajax request with some added validation, followed by some ad hoc questions tailored to the individual.

Many people will probably see nothing wrong with that test, as it provided both coding visibility and the ability to ask questions. Unfortunately, after careful research, my colleagues and I found the test unsuitable for the following reasons:

  • It tested mainly JavaScript, while our company looked for a wide range of skills such as HTML, CSS, JavaScript, accessibility and frameworks (React, Vue).
  • It required developers to “work under pressure”; not everyone is good at that, and it was not a requirement for our vacancies.
  • The ad hoc questions were too different to compare over time, and it was hard to define a candidate's level from them.

After the above “problems” were highlighted, we started to collate ideas and change our current process to create something that would fit our company's requirements.

The different ideas

The new test required everyone's involvement, so we opened a survey to ask people what they thought would make a good test and what they disliked about the current one. The ideas I received were amazing, and most of them were very interesting.

Some of the ideas that were put forward were:

  1. Create a set of questions from 30secondsofinterviews.org
  2. Ask candidates to complete the FizzBuzz test
  3. Ask candidates to bring a project that they are proud of, to discuss during the interview
  4. Ask them to complete a test in-house (replicate a website page)
  5. Give candidates multiple tests, allowing them to choose what they know best
  6. Live bug fixing

The ideas above were all great, and choosing between them was a hard and lengthy process. At first they may all seem good, but each of them had a problem: either it focused too much on a specific skill (2), had legal complications because candidates could not share the code (3), or could be misleading because people could prepare or cheat (1, 5).

Disclaimer

I would like to start this final part by saying that what I am going to share with you is not perfect, and it may not fit your company at all. So far, though, we have had great feedback from everyone who has completed it, and it has allowed us to interview a wide range of candidates, from graduate to senior, without needing to adapt it.

The result

The final test was a mix of almost all the above suggestions, and it has been carefully crafted (and is actively tuned with feedback from new candidates). The new test has three main parts:

  1. Home exercise
  2. Interview bug fixing
  3. Questions

Home exercise

I think everyone can agree with me that interviews are very stressful. You are all dressed up, uncomfortable, scared of doing anything wrong (e.g. using Google to search for information) and wanting to give a good impression, which sometimes leads to the opposite result. To avoid this, we collectively decided to abolish having to complete a full exercise while under pressure.

The newly proposed home exercise works as follows:

  • The candidate completes a specific exercise (I am not allowed to give too many details).
  • The exercise includes basic requirements (basic HTML, CSS and JS).
  • The candidate is asked to complete two more points from a pool of six specific topics (responsive design, advanced JavaScript, unit testing, accessibility, advanced SASS, a JS library such as Vue or React).
  • The candidate is provided with a ready-to-use zip (they just need to install Node and type npm install).
  • The candidate is asked to spend no more than three hours on the exercise, and depending on their availability (working full time or unemployed) we give them an “expected” window in which to fit the work (a week or a couple of days).

The main point of the above breakdown is that it offers candidates the possibility to choose the skills they like most. We had candidates who just wanted to focus on the HTML and make the design beautiful and clean, and others who wanted to show off their JavaScript skills by writing the exercise with full test coverage and completing the advanced JS request.

Interview bug fixing

All developers know well that one of the most important skills in programming is the ability to fix bugs. My team thought we needed to introduce a step in our interview that would prevent people from cheating (asking someone else to complete the exercise for them).

We thought about “asking questions” about the exercise, but we knew too well that people could “study” the implementation and fool us. So we decided to ask developers to “fix some bugs”; more precisely, we “break” their own exercise in a couple of places and ask them to fix it.

Depending on their level of confidence, we either do this together with the candidate or leave them some space to sort it out.

We found this step to be very informative. It gives us a sense of the candidate's confidence level, it is less stressful than live coding (as they are working on their own code), and it also shows us their problem-solving skills. I have to admit that this is probably the part I love the most.

Questions

Up to this point, candidates' tests would still have been very hard to compare and contrast. We needed something that would allow different interviewers to “understand” a candidate's level without having to open up tens of projects.

Our final decision was a set of questions that we all built from the ground up (this was the hardest part to agree on). The questions are divided by “topic” and “level”.

The topics follow the same distinctions as those in the home exercise, plus a few extras (like Git, agile, etc.). Each topic has 3-4 questions, all divided by level.

The above distinctions allow us to put the right questions to the right individual (knowing their preferences from the exercise and from the live bug fixing).

These questions are not fully defined; they are just “placeholders” for specific topics. For example, a JS question could be “ES6” and a CSS one could be “responsive”. It is at the interviewer's discretion to ask specific questions depending on the discussion they had and the code they have seen, for example “what is the difference between let and const?” or “how do you use media queries?”.

Each question is written down by the interviewer, and then the answer level is recorded (good, basic, knowledgeable, not known). Writing this single word instead of the complete answer allows us to “compare” candidates and understand their fit within the company (I am aware that, due to the nature of the open questions, it is not a real comparison, but it provides a good idea of strengths and weaknesses).

Summary

As mentioned above, this interview seems to be working very well for us. It has not only been used for external candidates; thanks to its nice progressive structure, it has also been used for internal “developer programmes” and “graduate training”.

We have now built a pool of over 15 different exercises and responses, which is really supporting us in making good decisions about candidates. Since the introduction of the above exercise, we also seem to be able to allocate candidates to the right roles, thanks to the more detailed skills information that the test provides us.

I would be very happy to receive feedback, either by commenting below or on Twitter. All feedback, both positive and negative, is welcome, because our real focus is to make our interview as smooth and stress-free as possible for all our candidates.

 

How to debug Jasmine-ES6 in Visual Studio Code

This article shows the configuration required to debug Jasmine-ES6 in Visual Studio Code.

Jasmine-ES6 provides helpers and overrides that augment Jasmine for use in an ES6+ environment. It is great when you have a project that you do not want to transpile using Babel, and it turned out to be one of the NPM packages used in one of the latest projects I was involved in.

Due to the nature of the package, it is not possible to debug Jasmine-ES6 directly in the browser, but it is possible using the debug feature provided by Visual Studio Code. The settings provided below will actually work to emulate any NPM command that you currently use.

Create a debug configuration file in Visual Studio Code.

Visual Studio Code enables us (sometimes with the use of extensions) to debug almost any coding language (JS, C#, PHP, etc.).

To access the Debug view, we need to click the “bug” icon in the left-hand menu.

Now that we are in the Debug view, we can add our configuration. To do so, click the dropdown next to the green arrow, as shown in the image below.

Visual Studio Code (VSC) will provide you with a list of “predefined” debugging configurations that will support you in completing the setup. In our case we are going to use the “Launch Program” option.
[Image: Visual Studio Code available configurations]
Our configuration file will look something like this:
Visual studio Code basic debug file
 
{
    "version": "0.2.0",
    "configurations": [
        {
            "type": "node",
            "request": "launch",
            "name": "Launch Program",
            "program": "${workspaceRoot}/app.js"
        }
    ]
}
The configuration file can contain multiple entries, which can be selected from the dropdown we used previously.

Setting the config

The config requires two main pieces of information. The first is the program that we would like to run; this can be whatever program you currently run from the command line. When typing a command you probably just use the name of the package (depending on how it is installed), for example “jasmine init”.

Node will automatically know that you are in reality looking for a package called jasmine within the node_modules folder. Unfortunately our debug configuration file is not that clever and requires you to specify the complete path.

You can use ${workspaceRoot} (or the newer ${workspaceFolder}) to select the workspace root, and then build the rest of the path required to reach the entry JS file of your package. In the case of Jasmine-ES6 the path will look something like:
jasmine-es6 path
 
  1. "${workspaceRoot}/node_modules/jasmine-es6/bin/jasmine.js"
Using the above as the program is the equivalent of running the jasmine-es6 command in the command line. This will work, but in our case we want to be more specific and run just a single spec file.

In a command-line scenario I would run the following:
Jasmine command line
 
jasmine-es6 "/tests/Tests1spec.js"
To add parameters to our configuration, we need to specify the args array:
Args array
 
  1. "args": [
  2. "${workspaceFolder}\\tests\\Tests1spec.js"
  3. ]
If you use backslashes instead of forward slashes, you will have to escape them (as shown above).
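The spec file referenced in args is just an ordinary Jasmine spec. The file below is a hypothetical, minimal example (using the standard describe/it/expect API); setting a breakpoint inside the it block is enough to verify that the debugger is attached:

// tests/Tests1spec.js - hypothetical minimal spec matching the path used above.
describe('calculator', () => {
  it('adds two numbers', () => {
    const sum = 1 + 2; // set a breakpoint here to test the debugger
    expect(sum).toBe(3);
  });
});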

Conclusion

This post is aimed at supporting you and hopefully saving you some time. The debugging features of Visual Studio Code are quite extensive (I have debugged PHP in the past and it worked perfectly). Now that everything is set up, you can start debugging by clicking the green arrow in the Debug view, or just by pressing F5 on your keyboard (make sure to add a breakpoint where you would like the app to break).

There may be better methods to debug, and most people would have a webpack setup to support them in the transpilation and test run, but I wanted to go against the current and try something different.

As always, I am happy to receive any comments that can help future readers.

I will close the post with the complete file below:

Node Program debug in Visual Studio Code
 
{
    // Use IntelliSense to learn about possible Node.js debug attributes.
    // Hover to view descriptions of existing attributes.
    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
    "version": "0.2.0",
    "configurations": [
        {
            "type": "node",
            "request": "launch",
            "name": "Launch Program",
            "program": "${workspaceRoot}/node_modules/jasmine-es6/bin/jasmine.js",
            "args": [
                "${workspaceRoot}/tests/Tests1spec.js"
            ]
        }
    ]
}

 

How to Create a Database migration with Entity Framework

This article is the continuation of a series that started with a first article explaining how to set up SQL Server with Entity Framework using .NET Core.

This post explains how to create a migration script with the Entity Framework tools, and then how to use that migration to create all the tables in our local database.

Entity Framework

Entity Framework is an open-source ORM framework for .NET applications supported by Microsoft. It enables developers to work with data using objects of domain specific classes without focusing on the underlying database tables and columns where this data is stored. With the Entity Framework, developers can work at a higher level of abstraction when they deal with data, and can create and maintain data-oriented applications with less code compared with traditional applications.

The paragraph above perfectly summarises the power of this ORM framework: it supports developers in one of the most tedious tasks.

In this chapter we require two different NuGet packages:

  1. The Database provider
  2. The Entity Framework tools

All the packages below will be installed using the Package Manager Console. To access it in Visual Studio, go to:

Tools > NuGet Package Manager > Package Manager Console

The database provider

In our example we are going to use SQL Server (as we already chose in the previous post), but a full list of available database providers can be found in the official documentation.

To install our package we will have to type the following command:

Database Provider
 
Install-Package Microsoft.EntityFrameworkCore.SqlServer

Entity Framework Tools

Entity Framework is powerful by itself, but we also have a set of tools to make it even greater.

The tools can be installed by using the following command in Package Manager Console:

EntityFrameworkCore Tools
 
Install-Package Microsoft.EntityFrameworkCore.Tools

Create a migration script

After all of the above steps, creating a migration script is very straightforward.

Using our previously installed packages, we can run the following command in the Package Manager Console:

Migration Script
 
Add-Migration InitialMigrationScript

 

During the above step you may receive an error stating:

Migration Error
 
The term 'add-migration' is not recognized as the name of a cmdlet

If this error appears, you just have to close and reopen Visual Studio.

The command above will create a migration script called InitialMigrationScript and place it in a folder called Migrations.

The file is actually pretty readable, and it is worth analysing it so that, if necessary, you can modify or create one by hand in the future without the help of the tooling.
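To give an idea of its content, here is a trimmed-down sketch of roughly what the generated file looks like for the Player table from the first post in this series. The exact file name, namespace and column definitions will differ in your project; this is only meant to illustrate the Up/Down structure, not to be copied as-is.

using Microsoft.EntityFrameworkCore.Migrations;

namespace My_game.Migrations
{
    public partial class InitialMigrationScript : Migration
    {
        protected override void Up(MigrationBuilder migrationBuilder)
        {
            // Creates the Player table based on the Player model.
            migrationBuilder.CreateTable(
                name: "Player",
                columns: table => new
                {
                    PlayerId = table.Column<long>(nullable: false),
                    Name = table.Column<string>(nullable: true),
                    health = table.Column<decimal>(nullable: false)
                },
                constraints: table =>
                {
                    table.PrimaryKey("PK_Player", x => x.PlayerId);
                });
        }

        protected override void Down(MigrationBuilder migrationBuilder)
        {
            // Reverts the migration by dropping the table again.
            migrationBuilder.DropTable(name: "Player");
        }
    }
}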

Run a migration

This is our final step, after which our tables will be fully set up and ready to be used within our app.

To create the tables, we have to run the migration file we previously created, using one of the packages installed in the initial steps.

To run the migration, run the following command in the Package Manager Console:

Create Tables
 
Update-Database
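For completeness, if you prefer the cross-platform command line over the Package Manager Console, the dotnet-ef tooling (assuming it is installed for your project) provides equivalent commands:

dotnet ef migrations add InitialMigrationScript
dotnet ef database update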

Conclusion

We are now at the end of this little tutorial. The above steps should have allowed us to create a full migration script that can be used to keep a consistent database schema across environments and to save developers plenty of time.

I hope this can be helpful to any of you, as I personally spent some time putting this together and finding all the information I needed.

As mentioned above and in my previous post, I am sharing my findings as I use these technologies in side projects, and I am more than happy to receive feedback that can help me improve them.

 

How to setup SQL server with Entity Framework using .NET CORE

This article is the first of a series aimed at explaining how to use familiar technologies with .NET Core.

The aim of this post is to create a database from scratch using Entity Framework. We will configure SQL Server to run with our Core app and finally touch upon table relationships in Entity Framework.

Disclaimer: this is the first time I have used Entity Framework and .NET Core, so I am just sharing my “findings” and it is by no means the “best” solution available. I am more than open to suggestions on how to make the code below faster and cleaner.

Getting Started

To be able to follow along with this article, you will need to have a project set up and ready. If you do not have one, I would suggest you create a new project (Web API preferred) using the newly released templates available cross-platform. To enable this you will need to install the newest .NET Core tooling for Visual Studio.

In our case we are going to use a newly created Web API built on .NET Core 2.0.

Tables

One of the hardest tasks to complete, before you can create a full set of tables in a relational database (like SQL Server or MySQL), is to map out the tables. What your app needs may seem clear in your head, but I strongly suggest you create a complete map of all the tables, columns and their relationships on paper, to make sure you have thought of everything.

You will be able to change things later on, but database architecture is very important for the performance of your application.

In our case we are going to create a player table and an inventory table.

The tables will have the following columns:

Player

  • (int) PlayerId
  • (string) Name
  • (decimal) health

Inventory

  • (int) InventoryId
  • (int) PlayerId
  • (string) Name

The first step is to create a model class for each of the above tables. It is good practice to keep all models together within a folder called Models, placed in the root of our application.

Now that the folder is in place, we can create two files and call them Player.cs and Inventory.cs.

Player.cs
 
namespace My_game
{
    public class Player
    {
        public long PlayerId { get; set; }
        public string Name { get; set; }
        public decimal health { get; set; }
    }
}

The above model is quite simple; if you are familiar with C# at all, you have surely created a file that looked like this in the past. Entity Framework will use the above class to create the table and to support us in mapping our future database queries.

Now we need to create another file that will contain the Inventory class. This is going to be slightly different from the one above, as this class is expected to have a “relationship” with the Player class, because each player will be able to own many inventory items.

Inventory.cs
 
namespace My_game
{
    public class Inventory
    {
        public long InventoryId { get; set; }
        public string Name { get; set; }

        // Navigation property pointing at the owning player.
        public virtual Player Player { get; set; }
    }
}

As shown by the above code, to add a relationship you just need to add a virtual navigation property that uses the recently created Player model.

As you may have noticed, we have not specified any unique identifier when creating the models, nor any indication of where the relationship between the tables lies.

The magic is in the names. Entity Framework, if not told otherwise, expects the unique identifier to be called either Id or the class name followed by Id (e.g. InventoryId, PlayerId).

Relationships are handled in a very similar way: Entity Framework assumes the tables are connected by their unique identifiers and, as in the above case, we are free to change the default (this will not be covered in this article).
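If you also want to navigate the relationship from the player side, an optional collection property can be added to the Player model. This is not required for the convention described above to work; it is just a sketch of a common pattern:

using System.Collections.Generic;

namespace My_game
{
    public class Player
    {
        public long PlayerId { get; set; }
        public string Name { get; set; }
        public decimal health { get; set; }

        // Optional: one player can own many inventory items.
        public virtual ICollection<Inventory> Inventory { get; set; }
    }
}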

Context

Now that we have a couple of tables in place, it is time to fit them together. To do this we need the DbContext class from Microsoft.EntityFrameworkCore (often referred to as the context). This class is responsible for creating a complete picture of the database.

For this example we are going to create a context file called dbContext.cs. This file will include the models created above and will look like the following snippet:

dbContext.cs
 
using Microsoft.EntityFrameworkCore;

namespace My_game.Models
{
    public class dbContext : DbContext
    {
        public dbContext(DbContextOptions<dbContext> options)
            : base(options)
        {
        }

        public DbSet<Player> Player { get; set; }
        public DbSet<Inventory> Inventory { get; set; }
    }
}

The above class is going to be used in the next few chapters, together with the Entity Framework tools, to create a migration script that will eventually create our database tables.

Configure SQL server

Now that all our models and the context have been fully developed, we are ready to configure SQL Server. This article is not going to explain how to set up a SQL Server instance and create a database, as that is out of scope, but plenty of resources can easily be found on this topic.

Assuming that you have a server and a database set up, we will need to add a connection string to our appsettings.json file. This connection string will provide our application with the correct credentials to connect to our database.

For a local server called mssqllocaldb and a database called my_app_db, the connection string would look something like this:

appsettings.json
 
"ConnectionStrings": {
    "DefaultConnection": "Server=(localdb)\\mssqllocaldb;Database=my_app_db;Trusted_Connection=True;MultipleActiveResultSets=true"
  }

Connection strings vary depending on the settings, authentication and location of the server, so the one above is just shared to give you an idea. You can have multiple connection strings (for example for development and live environments); in our case the connection string is called DefaultConnection.
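As an illustration of keeping more than one connection string, the ConnectionStrings section could look something like the sketch below (the server name, user and password are purely hypothetical placeholders):

"ConnectionStrings": {
    "DefaultConnection": "Server=(localdb)\\mssqllocaldb;Database=my_app_db;Trusted_Connection=True;MultipleActiveResultSets=true",
    "LiveConnection": "Server=my-live-server;Database=my_app_db;User Id=my_app_user;Password=<your-password>;MultipleActiveResultSets=true"
}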

Now that we have added the above entry in our appsettings.json file, we are ready to link the database to our app.

This time we are going to insert some code in the Startup.cs file. This file includes all the services and configuration that are going to be made available within the app.

Our database connection is going to be a service, and as such, our code is going to be inserted within the ConfigureServices method.

SQL server service configuration
 
services.AddDbContext<dbContext>(options =>
    options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));

Adding the service is quite straightforward: you just need to reference the context we previously defined (dbContext) and specify the connection string that we want to use to connect to our database (DefaultConnection). This is all we need to be able to access the database within our application.

The example above will connect to SQL Server, but there are already providers available to connect to the most commonly used database servers.
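As an example, switching to the SQLite provider (assuming the Microsoft.EntityFrameworkCore.Sqlite package is installed and the connection string is adjusted accordingly) only changes the provider call:

// Hypothetical alternative: register the same context against SQLite instead of SQL Server.
services.AddDbContext<dbContext>(options =>
    options.UseSqlite(Configuration.GetConnectionString("DefaultConnection")));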

Query the database

Now that everything is linked, querying the database is going to be very simple. For example, the file below shows the code required to perform a select statement (get) and an insert statement (add).

Basic Database operations
 
using System.Collections.Generic;
using System.Linq;
using Microsoft.AspNetCore.Mvc;
using My_game.Models;

namespace My_game.Controllers
{
    [Route("api/[controller]")]
    public class dbController : Controller
    {
        private readonly dbContext _context;

        public dbController(dbContext context)
        {
            _context = context;
        }

        // GET api/db - select statement: returns all players.
        [HttpGet]
        public IEnumerable<Player> GetPlayer()
        {
            return _context.Player.ToList();
        }

        // POST api/db - insert statement: creates a new player.
        [HttpPost]
        public Player AddPlayer(string name)
        {
            var player = new Player
            {
                Name = name,
                health = 100
            };

            _context.Player.Add(player);
            _context.SaveChanges();

            return player;
        }
    }
}

Conclusion

In the next few articles we are going to explain how to create migration scripts and how to use them to create your tables; we will also explain how to set up your database to seed the tables with static data on start-up.

This article has covered the basic setup required to get you up and running with SQL Server on a .NET Core app. I really hope you find the information shared above useful, and I am happy to receive any comments that can help me tailor it for future readers.

 

"