
How to Create a Database Migration with Entity Framework

This article is the continuation of a series started with the first post, which explained how to set up SQL Server with Entity Framework using .NET Core.

This post explains how to create a migration script using the Entity Framework tools, and then how to use that migration to create all the tables in our local database.

Entity Framework

Entity Framework is an open-source ORM framework for .NET applications supported by Microsoft. It enables developers to work with data using objects of domain specific classes without focusing on the underlying database tables and columns where this data is stored. With the Entity Framework, developers can work at a higher level of abstraction when they deal with data, and can create and maintain data-oriented applications with less code compared with traditional applications.

The paragraph above perfectly summarises the power of this ORM framework: it supports developers in one of the most tedious tasks.

In this chapter we require two different NuGet packages.

  1. The database provider
  2. The Entity Framework tools

All the packages below will be installed using the Package Manager Console. To access it in Visual Studio, go to:

Tools > NuGet Package Manager > Package Manager Console

The database provider

In our example we are going to use SQL Server (as we have already defined in the previous post), but there is a list of other available database providers.

To install the package, we have to type the following command:

Database Provider
 
Install-Package Microsoft.EntityFrameworkCore.SqlServer

Entity Framework Tools

Entity Framework is powerful by itself, but we also have a set of tools to make it even greater.

The tools can be installed by using the following command in Package Manager Console:

EntityFrameworkCore
 
Install-Package Microsoft.EntityFrameworkCore.Tools

Create a migration script

After all of the above steps, creating a migration script is very straightforward.

Using our previously installed packages, we can run the following command in the Package Manager Console:

Migration Script
 
Add-Migration InitialMigrationScript

 

During the above step you may receive an error stating:

Migration Error
 
The term 'add-migration' is not recognized as the name of a cmdlet

If this error appears, just close and reopen Visual Studio.

The command above will create a migration script called InitialMigrationScript and place it in a folder called Migrations.

The file is actually quite readable, and it is worth analysing so that, if necessary, you can modify or create one in the future without the help of the tooling.
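For illustration, a generated migration for the Player table from the previous post may look roughly like the following. Treat this as a sketch: the exact code, names and annotations Entity Framework generates depend on your model and EF version.

```csharp
using Microsoft.EntityFrameworkCore.Metadata;
using Microsoft.EntityFrameworkCore.Migrations;

namespace My_game.Migrations
{
    // Sketch of Migrations/<timestamp>_InitialMigrationScript.cs
    public partial class InitialMigrationScript : Migration
    {
        protected override void Up(MigrationBuilder migrationBuilder)
        {
            migrationBuilder.CreateTable(
                name: "Player",
                columns: table => new
                {
                    PlayerId = table.Column<long>(nullable: false)
                        .Annotation("SqlServer:ValueGenerationStrategy",
                            SqlServerValueGenerationStrategy.IdentityColumn),
                    Name = table.Column<string>(nullable: true),
                    health = table.Column<decimal>(nullable: false)
                },
                constraints: table =>
                {
                    table.PrimaryKey("PK_Player", x => x.PlayerId);
                });
        }

        // Down reverses the migration, so it can be rolled back later.
        protected override void Down(MigrationBuilder migrationBuilder)
        {
            migrationBuilder.DropTable(name: "Player");
        }
    }
}
```

Notice the Up/Down pair: every migration knows both how to apply itself and how to undo itself.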

Run a migration

This is our final step, after which our tables will be fully set up and ready to be used within our app.

To create the tables, we have to run the previously created migration file using one of the packages installed in the initial steps.

To run the migration, run the following command in the Package Manager Console:

Create Tables
 
Update-Database

Conclusion

We are now at the end of this little tutorial. The above steps should have allowed us to create a full migration script that can be used to keep a consistent database schema across environments and to save developers plenty of time.

I hope this can be helpful to any of you, as I personally spent some time putting this together and finding all the information I needed.

As mentioned above and in my previous post, I am sharing my findings as I use these technologies in side projects, and I am more than happy to receive feedback that can help me improve them.

 

How to Set Up SQL Server with Entity Framework Using .NET Core

This article is the first of a series that aims to explain how to use familiar technologies with .NET Core.

The aim of this post is to create a database from scratch using Entity Framework. We will configure SQL Server to run with our Core app and finally touch upon table relationships in Entity Framework.

Disclaimer: This is the first time I have used Entity Framework and .NET Core, so I am just trying to share my “findings”, and this is by no means the “best” solution available. I am more than open to suggestions on how to make the code below faster and cleaner.

Getting Started

To follow along with this article, you will need to have a project set up and ready. If you do not have one, I would suggest creating a new project (Web API preferred) using the newly released templates available cross-platform. To enable this you will need to install the newest .NET Core tooling for Visual Studio.

In our case we are going to use a newly created Web API built on .NET Core 2.0.

Tables

One of the hardest tasks to complete before you can create a full set of tables in a relational database (like SQL Server or MySQL) is to create a full map of the tables. It may seem clear in your head what your app needs, but I strongly suggest mapping out all the tables, columns and their relationships on paper to make sure you have thought of everything.

You will be able to change things later on, but database architecture is very important for the performance of your application.

In our case we are going to create a player table and an inventory table.

The tables will have the following columns:

Player

  • (int) PlayerId
  • (string) Name
  • (decimal) health

Inventory

  • (int) InventoryId
  • (int) PlayerId
  • (string) Name

The first step is to create a model class for each of the above tables. It is good practice to keep all models together within a folder called Models, placed in the root of our application.

Now that the folder is in place, we can create two files and call them Player.cs and Inventory.cs.

Player.cs
 
namespace My_game.Models
{
    public class Player
    {
        public long PlayerId { get; set; }
        public string Name { get; set; }
        public decimal health { get; set; }
    }
}

The above model is quite simple; if you are familiar with C# at all, you have surely created a file that looks like this in the past. Entity Framework will use the above class to create the table and to support us in mapping our future database queries.

Now we need to create another file that will include the Inventory class. This is going to be slightly different from the one above, as this class is expected to have a “relationship” with the Player class: each player will be able to have many pieces of inventory.

Inventory.cs
 
namespace My_game.Models
{
    public class Inventory
    {
        public long InventoryId { get; set; }
        public string Name { get; set; }
        public long PlayerId { get; set; }
        public virtual Player Player { get; set; }
    }
}

As shown by the above code, to add a relationship you just need to add a “virtual” property that uses the recently created Player model.

As you may have noticed, we have not specified any unique identifier when creating the models, nor specified where the relationship between the tables lies.

The magic is in the names: if not told otherwise, Entity Framework expects the unique identifier to be called either Id, or the class name followed by Id (e.g. InventoryId, PlayerId).

Relationships are handled in a very similar way: Entity Framework assumes the tables are connected by their unique identifiers, and as with the above case, we are free to change the default (this will not be covered in this article).

Context

Now that we have a couple of tables in action, it is time to fit them together. To do so, we need the Microsoft.EntityFrameworkCore.DbContext class (often referred to as the context). This class is responsible for creating a complete picture of the database.

For this example we are going to create a context file called dbContext.cs. This file will include the models created above and will look like the following snippet:

dbContext.cs
 
using Microsoft.EntityFrameworkCore;

namespace My_game.Models
{
    public class dbContext : DbContext
    {
        public dbContext(DbContextOptions<dbContext> options)
            : base(options)
        {
        }
        public DbSet<Player> Player { get; set; }
        public DbSet<Inventory> Inventory { get; set; }
    }
}

The above class is going to be used in the next few chapters, with the Entity Framework tools, to create a migration script that will eventually create our database tables.

Configure SQL server

Now that all our models and the context have been fully developed, we are ready to configure SQL Server. Explaining how to set up an SQL Server instance and create a database is out of scope for this article, but plenty of resources can easily be found on this topic.

Assuming that you have a server and a database set up, we will need to add a connection string to our appsettings.json file. This connection string will provide our application with the correct credentials to connect to our database.

For a local server called mssqllocaldb and a database called my_app_db, a connection string would look something like this:

appsettings.json
 
"ConnectionStrings": {
    "DefaultConnection": "Server=(localdb)\\mssqllocaldb;Database=my_app_db;Trusted_Connection=True;MultipleActiveResultSets=true"
  }

The connection string varies depending on the settings, authentication and location of the server, so the one above is just shared to give you an idea. You can have multiple connection strings (for example, for development and live environments). In our case the connection string is going to be called DefaultConnection.
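For example, a second, hypothetical connection string for a live environment could sit next to the default one (the server name and credentials below are made up for illustration):

```json
"ConnectionStrings": {
    "DefaultConnection": "Server=(localdb)\\mssqllocaldb;Database=my_app_db;Trusted_Connection=True;MultipleActiveResultSets=true",
    "LiveConnection": "Server=my-live-server;Database=my_app_db;User Id=app_user;Password=<secret>;MultipleActiveResultSets=true"
}
```

Which entry gets used is decided in code, so switching environments is just a matter of asking for a different name.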

Now that we have added the above entry in our appsettings.json file, we are ready to link the database to our app.

This time we are going to insert some code into the Startup.cs file. This file includes all the services and configurations that are going to be made available within the app.

Our database connection is going to be a service, and as such, our code is going to be inserted within the ConfigureServices method.

SQL server service configuration
 
services.AddDbContext<dbContext>(options =>
    options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));

Adding the service is quite straightforward: you just need to add the context class that we previously defined (dbContext) and specify the connection string that we want to use to connect to our DB (DefaultConnection). This is all we need to be able to access the DB within our application.

The example above connects to SQL Server, but there are already options within the .NET framework to connect to the most commonly used database servers.

Query the database

Now that everything is linked, querying the database is going to be very simple. For example, the file below shows the code required to perform a select statement (get) and an insert statement (add).

Basic Database operations
 
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using My_game.Models;

namespace My_game.Controllers
{
    [Route("api/[controller]")]
    public class dbController : Controller
    {
        private readonly dbContext _context;

        public dbController(dbContext context)
        {
            _context = context;
        }

        [HttpGet]
        public IEnumerable<Player> GetPlayer()
        {
            return _context.Player.ToList();
        }

        [HttpPost]
        public Player AddPlayer(string name)
        {
            var player = new Player
            {
                Name = name,
                health = 100
            };
            _context.Player.Add(player);
            _context.SaveChanges();
            return player;
        }
    }
}

Conclusion

In the next few articles we are going to explain how to create a migration script and how to use it to create your tables; we will also explain how to set up your database to seed the tables with static data on start-up.

This article has covered the basic setup required to get you up and running with SQL Server on a .NET Core app. I really hope you find the information shared above useful, and I am happy to get any comments to support me in tailoring it for future readers.

 

What is Version Control in simple terms

In the past few months, I have spent my free time trying to support new developers in finding their first commercial experience.

Usually I find that most of these candidates are not far off being technically ready; what they really lack is knowledge of the “ecosystem” that experienced developers use passively, without even considering it important.

Today I am going to focus on version control. This post is not going to explain how to use the tool (there are lots of fantastic videos and courses out there that can do that for you), but will focus on explaining individual technical terms that are sometimes hard to understand without a proper explanation or example.

What is Version Control

Formally started in the 70s under the name Source Code Control System, version control is one of the most used tools across teams of developers. The actual definition of version control is:

Version control systems are a category of software tools that help a software team manage changes to source code over time.*

The main task achieved by version control is to orchestrate changes to files within a project. In simple words, version control software allows multiple developers to work on the same project at the same time, without the risk of overriding each other’s work.

The main reason this tool is not very common among new developers is probably that it does not seem useful for a single developer, and people underestimate its hidden powers.

Master, Origin and the simple commands

To explain further, we are going to use some real-life examples to illustrate the technical terms used by developers. The example we are going to use is cloud storage, like Dropbox or Google Drive. Many of us use these services on a day-to-day basis, and comparing version control to them should help in understanding some of the technical aspects and words used.

Origin

When working on a project, all developers will have a copy of the project locally on their drive, but origin is a shared copy that holds all the information and is the true copy of the project. If you take the example of Google Drive, this would be the folder stored in the cloud, which all users of that Google Drive account can access from their browsers.

Fetch

This is one of the simplest commands in version control. It allows a developer to update the history of the files. This command does not actually change any files locally; it is just used to show whether something has changed in origin, and to inform the developer whether the version of the files they are working on is up to date or needs to be updated.

Using the example of Google Drive again, this happens every time a computer is switched on and synchronises with the cloud folder, to see if anyone has changed any files.

Pull

The fetch command is sometimes omitted by developers, who tend to update the files locally directly using the “pull” command. Issuing a “pull” will, behind the scenes, “fetch” all the information from “origin” and then download all the required changes onto the developer’s machine to make sure the project files are up to date. A successful “pull” means that your local project files are the same as origin’s.

Bringing back our simple example above, the synchronisation of files is usually followed by those files being downloaded and updated locally, and this is what a “pull” does.

Push

This is the last of the simple, most used commands. All the above commands are used to grab information from origin, but “push” is used to update origin with the local changes we have made to files within the project.

A successful push means that your copy of the project and origin are the same. It is important to note that after someone “pushes” code to origin, all other developers need to “pull” to have an up-to-date version of the code.
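Although this post deliberately avoids teaching the tool itself, for the curious here is a minimal sketch of the three commands in git, using a throwaway local repository as “origin” (all paths, names and file contents below are made up for the demo):

```shell
set -e
tmp=$(mktemp -d)

# Create a shared "origin" and a developer's local copy of the project.
git init --quiet --bare "$tmp/origin.git"
git clone --quiet "$tmp/origin.git" "$tmp/dev" 2>/dev/null
cd "$tmp/dev"
git config user.email "dev@example.com"
git config user.name "Dev"

# Make a local change and push it: origin now matches our copy.
echo "v1" > notes.txt
git add notes.txt
git commit --quiet -m "first commit"
branch=$(git rev-parse --abbrev-ref HEAD)
git push --quiet origin "$branch"

# fetch only refreshes history; pull fetches and updates local files too.
git fetch --quiet origin
git pull --quiet origin "$branch"
```

The same three verbs appear in every version control client, even the graphical ones.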

Master, Branches, Merges and Merge Conflict

Now that we have covered the simple commands, it is time to move on to something a bit more complicated. In this part of the article, we are going to build a car to support our thinking and hopefully help explain these harder concepts.

Branches

In small projects, all developers will work on the default branch, called master. This will be the “only” representation of the project. But it is very common for developers to use multiple branches within a project. Branches are usually used to develop features or updates that a developer does not yet want to be part of master.

Merge

Depending on the outcome of our branch work, we may need to combine the new feature with the main work (“master”) or with another “branch”. To do so, we need to use a command called “merge”. We usually merge one branch “into” another. More details will be given in our example below.

Merge Conflict

Unfortunately, not everything always ends well. Sometimes when merging branches we may end up with a conflict. A conflict occurs when more than one developer has worked on the same file (more precisely, the same lines) and the version control software does not know how to deal with the changes. This situation is quite tricky, and we will try to cover it in more detail in the example below.
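For reference, in git a conflicted file is marked up roughly like this, and a human then edits the file to keep the wanted lines (the mirror lines are just our car example, not real syntax from any project):

```
<<<<<<< HEAD
automatic mirrors
=======
dark mirrors
>>>>>>> branch2
```

Everything between the markers is the competing version from each branch; resolving the conflict means deleting the markers and keeping the content you want.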

The production Line example

We are the proud owners of a single car production line. This establishment is called “master”, and in it we build the first version of our car.

We are now going to experiment on the car to create an update to our current model. We take the finished car (pull from master) and we create a specific “branch” in the factory to add a feature to the car: a dashcam. After this feature has been added in our branch, and the engineers are happy with the change, we “merge” our work back into the main production line (master). After doing so, every car will now come with a dashcam.

We are now going to carry out two experiments at the same time. As before, we take the current model from the production line, and we open two new branches: Branch1 and Branch2.

In Branch1 we are going to add a new feature, automatic mirrors; in Branch2 we are going to work on a separate feature called “dark mirrors”.

After some time spent in Branch1, we decide to merge the work back into the main production line. As with our previous example, there is no issue, because no one has worked on any feature on this car that affected the mirrors (or at least not in master).

Development on the second branch has now completed, and we want to merge Branch2 into master. Unfortunately, when trying to merge this new feature into the main production line, we have a “merge conflict”: there are two changes on the same production line that both try to alter the mirrors. One wants to add automatic mirrors and the other wants to make them dark. To solve this issue, the engineer needs to manually decide which features to keep from the different branches. In our case the engineer decides to keep both, and solves the conflict by adding both features to the main production line.

 

Conclusion

In the above article I have just touched on some of the most used commands and terms in version control. Using version control software is like driving a car: initially it seems impossible and everything is extremely hard, but then it becomes muscle memory and straightforward.

Unfortunately for me, that is now the case, and it is extremely hard to put even simple tasks into words. I thought this article would be easy to write, but it turned out to be one of the hardest.

If any of you know how to explain this in better terms, or have comments or questions, please do not hesitate to comment below and I will happily edit the article to make it better.

References

* Version control definition from: https://www.atlassian.com/git/tutorials/what-is-version-control

 

 

 

 

 

Write cleaner JavaScript code with ESLint

JavaScript has a bad reputation, mainly because it is extremely flexible. Many users abuse this great feature of the language, writing code that is very inconsistent and hard to follow. For example, a simple inconsistency like single versus double quotes around strings can make a project seem quite unclear and messy.

In recent years, after being exposed to different languages that follow a more rigid approach to coding standards, I have started to focus on writing cleaner JavaScript, and I would like to share some of those experiences.

JavaScript is especially prone to developer error due to its lack of a compilation process. Linting tools allow developers to discover problems with their JavaScript code without needing to execute it. In this article I will go into detail on how you can use and configure a lint utility to support you.

What is ESLint

ESLint is a pluggable linting utility created by Nicholas C. Zakas in June 2013 that can be used to analyse our JavaScript code and highlight errors on the fly, supporting quick and clean development.

There are many linters around, but below are some of the reasons that supported my decision to choose ESLint:

  • It is open source
  • Great documentation
  • Huge number of configurations
  • Good examples
  • Works with all major IDEs and code editors
  • Can be used from the command line
  • Works on build servers

Getting started with ESLint

ESLint requires Node.js to be installed on your machine, so if you do not have it, please download the latest stable release from the main website.

Install

Once Node.js is up and running, we can move to the next step: open a command prompt and install the ESLint package from npm (visit the npm website for more information regarding its usage).

Install eslint package globally
 
npm i -g eslint

Configure

Following the installation above, we now have to define a configuration file called .eslintrc.json. ESLint comes with a handy feature to support you in creating this file. To trigger it, go to the root location of your site and type the following command:

Create eslint config file
 
eslint --init

This command will present three options:

  1. Answer a couple of questions to define the required configuration
  2. Use a popular style guide (this will require an existing npm package)
  3. Inspect your JavaScript files (not suggested if your coding style is inconsistent)

The above will be displayed in the command prompt as shown by the image below:

eslint --init options

For the purpose of this blog post, I am going to provide my own configuration file, which needs to be extracted into the root of your site. The file can be downloaded by clicking the following link:

Eslint Configuration File

Run the linter

After successfully completing the above steps, you will be able to use ESLint via its extensive CLI (command line interface) commands or within most of the major IDEs (more details to follow). The most basic command just includes the file or path to check; for example, the following two commands will respectively check file.js and all files within the “site” folder:

eslint run
 
eslint "c:/site/file.js"
eslint "c:/site"

Configuration File explained

Now that we have ESLint fully configured, it is time to dive into the configuration file to see how it is structured and which configurations are set in our setup. As mentioned above, there are hundreds of configurations, all well documented on the ESLint website.

Root & Environments

Section one
 
"root": true, // set this file's folder as the root directory
"env": { // load common environment/plugin configurations
    "browser": true,
    "commonjs": true,
    "es6": true,
    "node": true,
    "jquery": true
}

The first section of our configuration file includes the root setting. This is an optional setting that defines the root of your project. Projects can have more than one ESLint configuration file; when running, ESLint searches the current folder and all of its ancestors until it finds a configuration with the root value set.

The next setting used is env (environment). This setting loads environment-specific configuration. For example, in the case above we are allowing the use of ES6 notation, so no error would be triggered when using the arrow notation =>.
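As a quick illustration, the snippet below lints cleanly only because "es6": true allows arrow functions and const (the function here is just a made-up example):

```javascript
// Arrow functions and const are ES6 features enabled by the es6 env.
const double = (n) => n * 2;
const doubled = [1, 2, 3].map(double);
console.log(doubled); // prints [ 2, 4, 6 ]
```

With the es6 environment switched off, the same lines would be reported as parsing errors rather than style problems.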

Globals

Globals eslint
 
"globals": {
    "exampleGlobalVariableName": true
}

The globals configuration accepts an object that determines which global variables we can use within our JS files without declaring them. Using an undeclared global will otherwise trigger a no-undef error.

For example, adding jquery to the env setting above indirectly set globals for the $ and jQuery variables. This setting is really useful for helping you spot unexpected dependencies, which would otherwise make unit testing hard.

Rules

Most of the rules have three simple settings.

  • off – This setting is self-explanatory: it switches the rule off.
  • warn – This setting triggers a warning. In many IDEs warnings are displayed with a yellow underline.
  • error – This setting returns an error. A file can have some warnings, but you should always aim for “error-free” files.

Some specific rules accept more arguments, but I would be wasting time trying to explain them all here, as the ESLint documentation is really complete.
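To make the three levels concrete, here is what a small rules section could look like; the rule names below are standard ESLint rules, but the chosen levels are purely a personal preference:

```json
"rules": {
    "quotes": ["warn", "single"],
    "semi": ["error", "always"],
    "no-console": "off"
}
```

Here double quotes would only produce a warning, a missing semicolon would be a real error, and console calls would be ignored entirely.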

Using Code Editors

The ESLint CLI is fantastic, but I do not expect you to run it every single time you modify a file, and even though the linter gives you the line number of each problem, it would still be hard to use on a daily basis.

Luckily for us, ESLint is very easy to set up in most of the major code editors. In this section of the article, I am going to show how to install the linter in Visual Studio Code, which is my editor of choice.

Visual Studio Code

To use ESLint in the editor, we first need to install the official extension, called ESLint. To access the extensions page, click the Extensions button in the left menu or press Ctrl+Shift+X (on Windows).

After installing the above extension, you will be required to restart the editor for the change to take effect.

On restart, assuming your ESLint configuration file is in the correct folder, you will immediately see errors highlighted in real time.

The screenshots below show some of the features available when the extension is enabled.

 

Conclusion

I personally doubted the need for a tool like this, but after using it, I cannot live without it. The configuration that I have shared is a personal choice, and I am more than happy for you to use it if you wish, but you should also dive into the available rules and see if there is anything that fits your guidelines.

Starting to use a linter is just the first step towards writing cleaner code. No matter whether you are a single developer or work in a big team, introducing this tool will increase the quality and readability of your code. We have all been guilty of sometimes being too lazy to keep our projects consistent and structured, but now the tools are available and we have no excuse not to start writing cleaner JavaScript.

 

A look at the past and a thought for the future

This year has personally been a roller-coaster. I started a new position that brought many new challenges. I have been defining the company “standards”, working hard to please as many people around the company as possible while still defining something strong and easy to pick up, and finally I managed to get outside of my comfort zone by having an article published in net magazine and speaking at DDDnorth.

If anyone in December 2016 had told me that I was going to accomplish all the points above in just one year, I would have laughed. When I look back at each individual point and remember the effort needed to achieve it, I am amazed to realise that it all fit into just 12 months, and today I want to share some of the lessons I have learned, plus thank some of the people who have really supported me in this amazing year.

New position =  new challenges

great responsibility quote

“With great power comes great responsibility” – Peter Parker

You do not need to know the future to realise that a new position will bring new challenges. Sometimes the challenges can actually be unexpected, and may require a shift in the way you are used to working in order to adapt.

I am part of a tech company, and as in many companies in this industry, moving up the ladder actually means detaching from the “development” environment to be more involved in “delivery/process” day-to-day activities.

Many developers make this move too early, without actually being aware of its consequences for their day-to-day activities. I personally thought really hard before accepting the position, and I have listed below the main questions you should ask yourself before moving away from your beloved developer position.

  • Do you have the right skillset?
    • Sometimes people are promoted because of their technical skills, but moving up may require other abilities (communication, management, coaching). It is very common to see amazing developers fail in managerial roles.
  • Do you want to use your “social” skills?
    • It is a fact: developers are not the most social individuals, and in many cases a promotion will mean attending more meetings and dealing with more people who are not developers… are you ready for the challenge?
  • Are you ready to stop coding?
    • Being a developer is very hard. The industry changes so quickly that even just a year away from the keyboard would be hard to recover from. Make sure that if you take this step you are actually ready to leave Visual Studio behind until retirement; otherwise moving up the ladder may not be the right decision for you.
  • Are you sure that you will be as happy as you are in your current position?
    • The one thing I learned from developers is that we do our job because we LOVE it. So before you make any decision, think about your happiness and not just about the pay rise.

I went through the above questions many times before feeling ready. It is very important that you do not make decisions driven just by money; think about your happiness. Remember that we spend more time at work than with our loved ones, so choosing the right position is like finding a wife (don’t tell my wife).

There is no right way to make changes

Change is never easy. It is very rare to make a big change and please everyone, and I found myself in the deep end before I realised it.

As stated above, I decided to create some standards to be used across the company with the aim of improving code quality (coding standards, ESLint implementation and unit tests).

It was a genuine idea, and I would never have thought it would generate so many discussions and create such a storm within the teams (I can see why no one had tried before).

If I could go back, I would probably change the approach used to tackle the situation, and below I list a couple of suggestions that could help you achieve your goals.

  • Create a “contract” in advance
    • Make sure everyone involved is aware of how things will be decided, in advance (for example: if 51% of developers want this, we go ahead). Defining this too late could create more trouble.
  • Make allies
    • It is very important to have allies when you want to make changes that could take time and energy. You need to have someone on your side, so be prepared.
  • Use facts, not words
    • This may not always be possible, but it is very useful to be able to reference other sources and not just give personal opinions. This helps make the discussions less personal.
  • Solutions, not problems
    • It is extremely common to have people who disagree with the proposed change. This is fine, as long as they bring a solution to the problem and do not just complain for the sake of it.
  • Accept defeat
    • This is probably the hardest one, but sometimes you need to be ready to accept defeat. As with the “contract” mentioned above, this needs to be defined at the start (give yourself a deadline or a tangible end).

It has not been easy to learn the points above, and I would have loved for someone to have shared them with me earlier, but I now try to use them in my day-to-day routine, and the results are incredible.

Don’t fear the unknown

Publishing an article in an international magazine and speaking at a conference are probably what made 2017 so special. The idea of entering this “unknown world” was so scary. I was not aware of the amount of time that goes into a simple 400-word article, or the hours of rehearsal necessary to feel “ready” to speak at a conference.

If you are wondering whether it was easier than it looks, unfortunately you will be disappointed to learn that it was way harder than I had anticipated. Getting ready for the conference and writing the article was like a never-ending cycle: I was creating something, making it work, sharing it with friends and family, and then doing it all again, over and over.

But there is a good part to the story. Even if it drained me of every last bit of energy, it was completely worth it! The feeling of seeing your face in the magazine, waking up in the morning and checking Twitter to find people referencing your article from the other side of the globe, or receiving great feedback after a speaking session, cannot be explained in words.

I was amazed to see the great effect the above experiences had on my day-to-day job and career. It is not easy, but I really suggest you all try something way outside your comfort zone. You will be amazed, and it could have unexpected consequences.

2018…

 

I do not know if 2018 can be more exciting than the year that has just passed, but I am sure I am ready for any challenge, and I will surely be looking for more things that push me outside my comfort zone.

I have always had a very good attitude at work, but the one thing experience is teaching me is that it is really important to always push your limits. It is perfectly fine to fail sometimes, and not trying is a failure of its own. I am always amazed to see the effect of a simple accomplishment on myself and on my colleagues.

Before waving goodbye to this fantastic year, I need to thank my amazing colleagues who supported me all the way, my blog readers who grow in number every day, the fantastic Coding Blocks podcast and Slack community, and of course my fantastic wife, who is always so supportive and knows how to cheer me up when things do not go to plan.

Buon Natale to everyone! See you next year!

 

 
