How to get started with IndexedDB, and why

If you have never heard of IndexedDB before, do not worry, you will not be the first. IndexedDB is a low-level API that provides a non-relational database directly in your browser. This browser feature can support you in writing fully featured PWAs (progressive web apps) with offline support.

In today’s article, we are going to build a service that will allow us to provide offline support for our blog. The article is going to cover the following:

  • IndexedDB introduction
  • Introduction to IDB – IndexedDB with promises
  • Database setup
  • CRUD methods (Create, read, update, delete)
  • Debugging

IndexedDB introduction

As briefly mentioned above, the IndexedDB API is not widely used by many developers, but it has actually been around for quite some time.

The use of this browser feature is usually associated with the development of PWAs, and more precisely with providing offline support. In my personal opinion, the main features of this API are the ability to be used within a service worker and the possibility to store BLOBs (files) within its stores (tables).

Support and Space limits

If you are worried about support and/or the maximum size of this database, you can be reassured quickly.

IndexedDB browser support is quite extensive: it covers over 95% of global browser usage. These statistics are very reassuring, as support is as extensive as that of any current framework (Vue, React, Angular).

IndexedDB browser support screenshot

Even if I would suggest you always try to keep its usage and size to a minimum, the API comes with a considerable space quota. The available amount of space that can be used by IndexedDB (and other low-level APIs) is browser dependent. These amounts have changed quite a few times in the last few years, but the table below shows the current quotas (as of November 2019), gathered from the official Google developer site.

Available space for IndexedDB in modern browsers: Chrome < 6% of free disk space, Firefox < 10% of free disk space, Safari < 50MB, IE10 < 250MB

Non relational database

If you have only ever used relational databases, such as SQL Server or MySQL, then this feature may take some time to fully understand.

The main difference is the “loose” definition of table columns and types, but a full article explaining the differences and their usage can be found on Pluralsight.
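To make that “loose” definition concrete, here is a small sketch (the field names are my own illustration): two entries destined for the same store do not need to share the same set of fields.

```javascript
// Two objects destined for the same "blogs" store.
// Unlike rows in a SQL table, they do not need to share the same columns.
const draft = {
  title: "my draft",
  content: "# work in progress"
};

const published = {
  title: "my post",
  content: "# done",
  publishedAt: "2019-11-01", // extra fields are perfectly fine
  tags: ["javascript", "pwa"]
};

// IndexedDB would happily store both in the same store;
// each entry simply carries the fields it needs.
const entries = [draft, published];
```

In a relational database, adding `publishedAt` and `tags` would have required a schema change first.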

Basic features

In this section we are going to introduce some of the vocabulary that will be used later in the course of this article.


Stores

If you have ever worked with any sort of database, you have surely heard of tables. When using IndexedDB, these are called stores.


Versioning

Due to the great flexibility provided by a non-relational database (the ability to save different data structures), it is vital to use its versioning feature.
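As a sketch of how versioning is typically used (the store names come from later in this article; the two-step migration itself is just an illustration), the upgrade callback receives the old version number and applies only the migrations that are missing:

```javascript
// Version-aware upgrade callback, in the shape expected by idb's openDB.
// oldVersion is 0 when the database does not exist yet.
function upgrade(db, oldVersion) {
  if (oldVersion < 1) {
    // version 1 introduced the "blogs" store
    db.createObjectStore("blogs", { keyPath: "id", autoIncrement: true });
  }
  if (oldVersion < 2) {
    // version 2 added the "authors" store
    db.createObjectStore("authors", { keyPath: "id", autoIncrement: true });
  }
}
```

A browser that already has version 1 of the database will only run the second block, while a brand new visitor gets both stores created in one go.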


Asynchronous

This is not actually a feature name, but just a warning to remind you that ALL features offered by IndexedDB are asynchronous, and natively exposed using callbacks and events (we are going to use a library to help us with this).


Indexes

This low-level API has the ability to create indexes within its stores. If you have never heard of indexes: in simple words, they allow you to “quickly” find entries within your store.
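As a sketch (the index name "by-title" is my own choice), an index is declared on a store inside the same upgrade callback used to create it:

```javascript
// Create a "blogs" store with an index on its "title" field.
// Code like this belongs inside the upgrade callback.
function setupBlogStore(db) {
  const store = db.createObjectStore("blogs", {
    keyPath: "id",
    autoIncrement: true
  });
  // The index lets us look entries up by title instead of by primary key.
  store.createIndex("by-title", "title");
  return store;
}
```

With the idb library we introduce next, the index can later be queried with shortcuts such as `db.getFromIndex("blogs", "by-title", "my blog title")`.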

Introduction to IDB

IndexedDB is a fantastic feature, and I really suggest you all get your hands on it and start using it. Unfortunately, it has one big disadvantage: it is event driven, and it does not fit current development methodology (promises and async/await).

To help us with that, we are going to use a simple library called idb. There are many libraries out there that provide greater support, but I am confident that the actual API is going to develop over time, and starting to use a “fully featured framework” may not be necessary in the near future.

This library can either be installed via NPM:

npm install idb

import { openDB, deleteDB, wrap, unwrap } from 'idb';

async function doDatabaseStuff() {
  const db = await openDB(…);
}

or used directly from unpkg:

<script type="module">
  import { openDB, deleteDB, wrap, unwrap } from '';

  async function doDatabaseStuff() {
    const db = await openDB(…);
  }
</script>

Database Setup

It is now time to start coding our application. We are going to build a simple service that can be used to store and fetch our posts and, as mentioned above, to provide content for our site in case of offline usage.

In our first step, we are going to create a database. With IndexedDB we can create as many databases as we want (in fact, you may already have a few created by packages and libraries).

To create a database we need to use the openDB method. This method is going to try to open the database and, if the database or the requested version does not exist yet, it will create or upgrade it before returning an instance.

let _db;

async function _init(DB_NAME, VERSION_NUMBER) {
  _db = await openDB(DB_NAME, VERSION_NUMBER, {
    upgrade() {
      // callback used to define the new database structure
    }
  });
}

_init("Blog", 1);

The above code would work as intended, as IndexedDB does not require you to set up individual tables and columns like other conventional databases.

But there may be times when you need more control over the stores (tables) of your database.

As previously mentioned, the upgrade callback is the method called to support us in creating and/or upgrading an existing database.

For example, with the following snippet, we are going to create a couple of stores (blogs, authors) and set an auto-incrementing key called id.

upgrade(db) {
  const storeNames = ["blogs", "authors"];
  storeNames.forEach((storeName) => {
    if (!db.objectStoreNames.contains(storeName)) {
      db.createObjectStore(storeName, { keyPath: "id", autoIncrement: true });
    }
  });
}
Due to the nature of a JSON-based database, this is all that is needed; we do not need to go into detail for each of the columns. The above code is enough to run CRUD operations on the stores created.

CRUD operations

It is now time to fill our database with data, and luckily this is quite simple with IDB. We are going to cover the following actions:

  • Create
  • GetAll
  • GetOne
  • Delete
  • Count


Create

For our first action, we are going to insert some data into our database. To achieve this, we are going to use the put method.

const entry = {
  "title": "my blog title",
  "content": "# my content markdown"
};

await _db.put("blogs", entry);

GetAll and GetOne

Now that our store is starting to fill with data, it is time to find a way to retrieve it. To fetch the full content of a store we can use the getAll method; to retrieve a specific entry we can use the get method with a unique ID, which we previously specified as the id column with the keyPath option during our store creation.

IndexedDB actions are asynchronous, and for this reason we are going to use async/await to wait for the promises issued by our wrapper, IDB.

//get all entries
const blogs = await _db.getAll("blogs");

//get a specific entry
const blog = await _db.get("blogs", 1);


Delete

There will be times when we need to remove some of the entries within our database.

Luckily, this is going to be as easy as creating them. Using the delete method with a specific ID is sufficient to achieve our needs:

//delete a specific entry
await _db.delete("blogs", 1);


Count

IndexedDB provides us with some out-of-the-box functionality, such as the count method. This is going to return the number of entries within a specific store.

const storeEntryCount = await _db.count("blogs");
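Putting the CRUD calls together, the whole service can be sketched as a small wrapper around the database (the class name and its shape are my own; it assumes a database opened with openDB as shown in the setup section):

```javascript
// A minimal service wrapping the CRUD calls shown above.
class BlogService {
  constructor(db, storeName = "blogs") {
    this.db = db;
    this.storeName = storeName;
  }

  create(entry) {
    // put inserts a new entry, or replaces one with the same id
    return this.db.put(this.storeName, entry);
  }

  getAll() {
    return this.db.getAll(this.storeName);
  }

  getOne(id) {
    return this.db.get(this.storeName, id);
  }

  remove(id) {
    return this.db.delete(this.storeName, id);
  }

  count() {
    return this.db.count(this.storeName);
  }
}
```

Every method returns a promise, so the service can be consumed with async/await just like the raw idb calls.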


Debugging

Debugging, for me, is one of the most important features of any new technology I try out.

I am personally the kind of developer that needs to make “mistakes” to fully learn, but I also need a way to “see” these mistakes to learn from them.

As it turns out, IndexedDB is extremely simple to debug, as you will surely already have the debugging tool needed to improve your skills: Google Chrome.

Being a developer, I assume that you are already familiar with the Chrome debugger.

Chrome debugger – Application tab – IndexedDB

The IndexedDB feature can be found within the Application tab of the debugging tools. If you have ever worked with Local Storage and/or Session Storage, then introducing yourself to IndexedDB is going to be extremely simple.

Chrome provides us with everything we need to really make the most of this database, with the simple UI that only Google can provide.


Conclusion

IndexedDB is a very powerful resource, but it lacks the learning resources necessary for every developer to start using it.

If you are willing to struggle a little with scarce resources, but are eager to get ahead on “offline support” development, then I really hope that this blog post provides you with enough information to get started.

If you have any questions about this article and/or the technology mentioned above, please do not hesitate to get in touch.

How to fix git error “fatal: bad object HEAD”

During development, I stumbled upon an error in Git that prevented me from completing any operation that required contacting the remote origin (fetch, push, etc.).

No matter which branch I was on, I would always receive the following error when trying any of the above operations:

fatal: bad object HEAD

The repository in question was quite big (over 4GB) and I wanted to find a solution that did not require me to pull a full copy of the repository down again.

The problem

The source of the issue is a corrupted file within the .git folder. There is no special reason for this to happen (or at least I could not find any reasonable explanation for it), but it is also not extremely common.

The code

I am going to share the solution that I have used in my specific case. It solved the issue in just a few seconds.

The following code needs to be run from the location of the affected repository.

cp .git/config .git/config.backup
git remote remove origin
mv .git/config.backup .git/config
git fetch

The explanation

The code above is fairly self-explanatory, but for the curious I will explain it below, line by line.

cp .git/config .git/config.backup

This line uses the command-line utility for copying files. It simply makes a backup copy of the config file within the .git folder.

git remote remove origin

This line of code uses the remote feature of Git, whose main duty is managing the set of tracked repositories. As explained in the official Git documentation, the remove command is used to remove all remote-tracking branches and configuration settings for a remote.

This line is the actual solution to our problem: by removing the configuration and existing files of the remote branches, we remove the corrupted files.

mv .git/config.backup .git/config

We are now going to use another command-line utility, mv, which moves (or renames) files. This command restores our previously backed-up config file. This step is needed to re-set all the remote branches.

git fetch

If you have ever used Git, you will have come across the fetch command. Running this command will recreate all the files that we previously removed. Git is going to use the information within the config file to know which branches and tags should be fetched from the remote.


Conclusion

The above code helped me, and I hope it will support you in solving your issues. Please feel free to post any comments and/or suggestions to improve the above fix.

Frontend Job interview research

Need of change

Last year, during our front-end chapter meeting, we agreed to invest some time in improving our front-end interview format, as it was not really fit for purpose and did not really support us in making the right decisions about candidates.

The interview format in use involved a coding test, focused on making an AJAX request with some added validation, followed by some ad hoc questions tailored to the individual.

Many people will probably see nothing wrong with the test, as it actually provided both coding visibility and the ability to ask questions. Unfortunately, after careful research, my colleagues and I found the test unsuitable for the following reasons:

  • It tested mainly JavaScript, while our company looked for a wide range of skills such as HTML, CSS, JavaScript, accessibility, and frameworks (React, Vue).
  • It required developers to “work under pressure”; not everyone is good at that, and it was not a requirement for our vacancies.
  • The ad hoc questions were too different to compare over time, and it was hard to gauge candidates’ levels from them.

After the above “problems” were highlighted, we started to collate ideas and change our process to create something that would fit our company's requirements.

The different ideas

The new test required the involvement of everyone, so we opened a survey to ask people what they thought would make a good test, and what they disliked about the current one. The ideas that I received were amazing, and most of them interesting.

Some of the ideas that were forwarded were:

  1. Create a set of question from
  2. Ask candidates to complete the FizzBuzz test
  3. Ask candidates to bring a project that they were proud of to discuss during the interview
  4. Ask them to complete an in-house test (replicate a website page)
  5. Give candidates multiple tests, allowing them to choose what they knew best
  6. Live bug fixing

The ideas above were all great, and choosing between them was a hard and lengthy process. At first, all of the above may seem good, but each had a problem: either it focused too much on a specific skill (2), had legal complications because candidates could not share the code (3), or could be misleading because people could prepare or cheat (1, 5).


I would like to start this paragraph by saying that what I am going to share with you is not perfect, and it may not fit your company at all; but so far we have had great feedback from everyone that has completed it, and it has allowed us to interview a wide range of candidates, from graduates to seniors, without the need to adapt it.

The result

The final test was a mix of almost all of the above suggestions, and has been carefully crafted (and is actively tuned with new candidates' feedback). The new test has three main parts:

  1. Home exercise
  2. Interview bug fixing
  3. Questions

Home exercise

I think everyone would agree with me in saying that interviews are very stressful. You are all dressed up, uncomfortable, scared of doing anything wrong (e.g. using Google to search for info) and wanting to give a good impression, which sometimes leads to the opposite result. To avoid this, we collectively decided to abolish having to complete a full exercise while under pressure.

The new proposed solution includes the following points:

  • Complete a specific exercise (I am not allowed to give too many details).
  • The exercise includes basic requirements (basic HTML, CSS and JS).
  • The candidate is asked to complete 2 more points from a pool of 6 specific topics (responsive design, advanced JavaScript, unit testing, accessibility, advanced SASS, a JS library: Vue or React).
  • The candidate is provided with a ready-to-use zip (they just need to install Node and type npm install).
  • The candidate is asked to spend no more than 3 hours on the exercise, and depending on their availability (full-time or unemployed) we give them an “expected” time in which to fit the work (a week or a couple of days).

The main point to take from the above breakdown is that we offer candidates the possibility to choose the skills they like most. We had candidates that just wanted to focus on the HTML and make the design beautiful and clean, and others that wanted to show off their JavaScript skills by writing the above with full test coverage and completing the advanced JS request.

Interview bug fixing

All developers know well that one of the most important skills in programming is the ability to fix bugs. My team thought we needed to introduce a step in our interview that would prevent people from cheating (asking someone else to create the above exercise for them).

We thought of “asking questions” about the exercise, but we knew too well that people could “study” the implementation and fool us. So we decided to ask developers to “fix some bugs”; more precisely, we would “break” their own exercise in a couple of places and ask them to fix it.

Depending on their level of confidence, we either do this together with the candidates, or leave them some space to sort it out.

We found this “step” to be very informative. It provides us with the candidate's confidence level, it is less stressful than live coding (as they are working on their own code), and it also shows us their problem-solving skills. I have to admit that this is probably the part I love the most.


Questions

Unfortunately, until now, candidates' tests would have been very hard to compare and contrast. We still needed something to enable different interviewers to “understand” a candidate's level without having to open up tens of projects.

Our final decision was a set of questions that we all built from the ground up (this was the hardest part to agree on). The questions are divided into “topics” and “levels”.

The topics follow the same distinction as the ones provided in the home exercise, plus a few extras (like Git, agile, etc.). Each topic has 3-4 questions, all divided by level.

The above distinction allows us to provide the right questions to the right individual (knowing their preferences from the exercise and from the “live bug fixing”).

These questions are not fully defined; they are just “placeholders” for specific information. For example, a JS question could be “ES6”, and a CSS one could be “responsive”. It is at the interviewer's discretion to ask a specific question depending on the discussion they had and the code they have seen, for example “what is the difference between let and const?” or “how do you use media queries?”.

Each question is written down by the interviewer, and then the answer level is recorded (good, basic, knowledgeable, not known). Writing this single word instead of the complete answer allows us to “compare” candidates and understand their fit within the company (I am aware that, due to the nature of the open questions, it is not a real comparison, but it provides a good idea of strengths and weaknesses).


Conclusion

As mentioned above, this interview format seems to be working very well for us. I have not only used it for external candidates; due to its nice progressive structure, it has also been used for internal “developer programs” and “graduate training”.

We have now built a pool of over 15 different exercises and responses, which is really supporting us in making “good” decisions about candidates. Since the introduction of the above exercise, we also seem to be able to allocate candidates to the right spot, thanks to the more detailed skills information that the test provides us.

I would be very happy to receive any feedback by commenting below or on Twitter. All feedback, positive or negative, is welcome, because our real focus is to make our interviews as smooth and stress-free as possible for all our candidates.


How to debug Jasmine-es6 in visual studio code

This article is going to show the configuration required to debug Jasmine-ES6 in Visual Studio Code.

Jasmine-ES6 provides helpers and overrides that augment Jasmine for use in an ES6+ environment. It is great when you have a project that you do not want to transpile using Babel, and it turned out to be one of the NPM packages used in one of the latest projects in which I was involved.

Due to the nature of the plugin, it is not possible to debug Jasmine-ES6 directly in the browser, but it is possible using the debug feature provided by Visual Studio Code. The settings provided below will actually work to emulate any NPM command that you are currently using.

Create a debug configuration file in Visual Studio Code.

Visual Studio Code enables us (sometimes with the use of extensions) to debug almost any coding language (JS, C#, PHP, etc.).

To access the Debug page we need to click the “bug” icon on the left hand menu.

Now that we have accessed the debugging page, we are able to add our configuration. To do so, click on the dropdown next to the green arrow, as shown in the image below.

Visual Studio Code (VSC) will provide you with a list of “predefined” debugging configurations that will support you in completing the setup. In our case we are going to use the “Launch Program” option.

visual studio code available configuration

Our configuration file will look something like this:

Visual Studio Code basic debug file
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "launch",
      "name": "Launch Program",
      "program": "${workspaceRoot}/app.js"
    }
  ]
}
The configuration file can contain multiple entries, which can be selected via the dropdown used previously.

Setting the config

The config requires two main pieces of information. The first is the program that we would like to run; this can be whatever program you currently run from the command line. When writing a command you will probably just use the name of the package (depending on how it is installed), for example “jasmine init”.

Node will automatically know that you are actually looking for a package called jasmine within the node_modules folder. Unfortunately our debug configuration file is not that clever and requires you to specify the complete path.

You can use ${workspaceRoot} to select the workspace root, and then form the rest of the path required to reach the entry JS file of your package. In the case of Jasmine-ES6 the path will look something like:
jasmine-es6 path
"${workspaceRoot}/node_modules/jasmine-es6/bin/jasmine.js"
Running the above is the equivalent of running the jasmine-es6 command in the command line. This will work, but in our case we want to be more specific and run just one spec file.

In a command-line scenario I would run the following:

Jasmine command line

jasmine-es6 "/tests/Tests1spec.js"
To add parameters to our configuration we need to specify the args array:

Args array

"args": [
  "${workspaceFolder}\\tests\\Tests1spec.js"
]
If you use backslashes instead of forward slashes, you will have to escape them (as shown above).


Conclusion

The above post is aimed at supporting you and hopefully saving you some time. The debugging features of Visual Studio Code are quite extensive (I debugged PHP in the past and it worked perfectly). Now that everything is set up, you can start debugging by clicking the green arrow on the debug page, or just by pressing F5 on your keyboard (make sure to add breakpoints where you would like the app to break).

There may be better methods to debug, and most people will have a webpack setup to support them in the transpilation and test run, but I wanted to go against the current and try something different.

As always I am happy to receive any comment that can support the future readers.

I complete the post with the full file below:

Node Program debug in Visual Studio Code
{
  // Use IntelliSense to learn about possible Node.js debug attributes.
  // Hover to view descriptions of existing attributes.
  // For more information, visit:
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "launch",
      "name": "Launch Program",
      "program": "${workspaceRoot}/node_modules/jasmine-es6/bin/jasmine.js",
      "args": [
        "${workspaceRoot}/tests/Test1spec.js"
      ]
    }
  ]
}


How to Create a Database migration with Entity Framework

This article is the continuation of a series started with the first article, which explained how to set up SQL Server with Entity Framework using .NET Core.

This post is going to explain how to create a migration script with the Entity Framework tools, and then we are going to use this migration to create all the tables on our local database.

Entity Framework

Entity Framework is an open-source ORM framework for .NET applications supported by Microsoft. It enables developers to work with data using objects of domain specific classes without focusing on the underlying database tables and columns where this data is stored. With the Entity Framework, developers can work at a higher level of abstraction when they deal with data, and can create and maintain data-oriented applications with less code compared with traditional applications.

The paragraph above perfectly summarises the power of this ORM framework. It supports developers in one of the most tedious tasks.

In this chapter we require two different NuGet packages:

  1. The database provider
  2. The Entity Framework tools

All the packages below will be installed using the Package Manager Console. To access it in Visual Studio, go to:

Tools > NuGet Package Manager > Package Manager Console

The database provider

In our example we are going to use SQL Server (as we have already defined in the previous posts), but there is a list of other available database providers.

To install our package we will have to type the following command:

Database Provider
Install-Package Microsoft.EntityFrameworkCore.SqlServer

Entity Framework Tools

Entity Framework is powerful by itself, but we also have a set of tools to make it even greater.

The tools can be installed by using the following command in Package Manager Console:

Install-Package Microsoft.EntityFrameworkCore.Tools

Create a migration script

After all of the above steps, creating a migration script is very straightforward.

Using our previously installed packages, we can run the following command in the Package Manager Console:

Migration Script
Add-Migration InitialMigrationScript


During the above step you may receive an error stating:

Migration Error
The term 'add-migration' is not recognized as the name of a cmdlet

If this error appears, you just have to close and reopen Visual Studio.

The command above will create a migration script called InitialMigrationScript and place it in a folder called Migrations.

The file is actually pretty readable, and it is worth analysing so that in the future you can modify it, or create one without the help of the scaffolding if necessary.

Run a migration

This is our final step, after which our tables will be fully set up and ready to be used within our app.

To create the tables, we have to run the migration file previously created, using one of the packages installed in the initial steps.

To run the migration, run the following command in the Package Manager Console:

Create Tables
Update-Database


Conclusion

We are now at the end of this little tutorial. The above steps should have allowed us to create a full migration script that can be used to keep a consistent database schema across environments and save developers plenty of time.

I hope this can be helpful to any of you, as I personally spent some time putting this together and finding all the information I needed.

As mentioned above and in my previous post, I am sharing my findings as I use these technologies in side projects, and I am more than happy to receive feedback that can help me improve them.