DockerCon LIVE is here! (Docker Blog, May 28, 2020)

DockerCon LIVE 2020 is about to kick off and there are over 64,000 community members, users and customers registered! Although we miss getting together in person, we’re excited to be able to bring even more people together to learn and share how Docker helps dev teams build great apps. Like DockerCons past, there is so much great content on the agenda for you to learn from and expand your expertise around containers and applications.

We’ve been very busy here at Docker and a couple of months ago, we outlined our refocused developer-focused strategy. Since then, we’ve made great progress on executing against it and remain focused on bringing simplicity to the app-building experience, embracing the ecosystem, and helping developers and developer teams bring code to cloud faster and easier than ever before. A few examples:

We hope you can join us today for #DockerCon! There’s lots more code to cloud goodness to come from us, and we can’t wait to see what the community does next with Docker.  

Shortening the developer commute with Docker and Microsoft Azure (Docker Blog, May 27, 2020)

Do you remember the first time you used Docker? I do. It was about six years ago and like many folks at the time it looked like this:

docker run -it redis

I was not using Redis at the time but it seemed like a complicated enough piece of software to put this new technology through its paces. A quick Docker image pull and it was up and running. It seemed like magic. Shortly after that first command I found my way to docker-compose. At this point I knew how to run Redis and the docs had an example Python Flask application. How hard could it be to put the two together?

version: '3'
services:
  web:
    build: .
    ports:
      - "5000:5000"
  redis:
    image: "redis"
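
With a compose file like that in place, bringing both services up is a single command:

docker-compose up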

I understood immediately how Docker could help me shorten my developer “commute”: all the time I spent doing something else just to get to the work I wanted to be doing. It was awesome!

As time passed, unfortunately my commute started to get longer again. Maybe I needed to collaborate with a colleague or get more resources than I had locally. OK, I can run Docker in the cloud, let me see how I can get a Docker Engine. Am I going to use another tool, set up one manually, automate the whole thing? What about updates? Maybe I should use one of the managed container services? Well then I’d have to use a different CLI and perhaps a different file format. That would be totally different from what I’m using locally. Seeing my commute bloat substantially, my team and I began to work on a solution and set out to find collaborators among the cloud service providers.

I am excited to finally be able to talk about the result of a collaborative set of ideas that we’ve been working on for a year to once again shorten your developer commute. Docker is expanding our strategic partnership with Microsoft and integrating the Docker experience you already know and love with Azure Container Instances (ACI). 

What does that mean for you? The same workflow in Docker Desktop and with the Docker CLI and the tools you already have with all the container compute you could want. No infrastructure to manage. No clusters to provision. When it is time to go home, docker rm will stop all the meters. We will be giving an early preview of this work on stage at DockerCon tomorrow; so please register here and watch the keynote. 

Let me give you a sense of how simple the process will be. You will even be able to log into Azure directly from the Docker CLI so you can connect to your Azure account. The login experience will feel very familiar, and odds are you have used it before for other services:

docker login azure


Once you are logged in, you just need to tell Docker that you want to use something besides the local engine. This is my favorite part – it is where, in my opinion, the magic lives. About a year ago we introduced Docker Context. Originally, it let you switch between engines (local or remote), Swarm, and Kubernetes. When it launched, I thought we needed to make this happen for any service that can run a container. If you want to shorten a developer’s commute, this is the way to do it.

docker context create aci-westus aci --aci-subscription-id xxx --aci-resource-group yyy --aci-location westus

All you need is a set of Azure credentials. If you have an Azure resource group you want to use you will be able to select it or we can create one for you. Once you have your Docker Context, you can tell Docker to switch to using it by default.

docker context use aci-westus

Once you have the context selected, it is just Docker. You can run individual containers. And you can run multiple containers with docker-compose; look at Awesome Compose to find a compose file to try out. Or fire up Visual Studio Code and get back to doing what you wanted to do – writing code. As part of this strategic partnership with Microsoft, we are working closely with the Visual Studio Code teams to make sure the Docker experience is awesome.
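
To give a rough sketch of what that could look like once the ACI context is active (the exact commands and flags were still in preview at the time of writing, so treat this as illustrative rather than final):

# with the aci-westus context selected, ordinary Docker commands run in Azure Container Instances
docker run -d -p 80:80 nginx
docker ps
# when you are done, remove the container to stop the meter
docker rm <container>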

Docker and Microsoft’s partnership has a long history. I am proud to be able to talk about what I have been working on for the last year. Together we are working on getting a beta ready for release in the second half of 2020. You can register for the beta here.

For more information, check out this blog post by Paul Yuknewicz, Group Product Manager, Azure Developer Tools or read the press release.


If there are other providers you would like to see come on board to bring the simplicity of the Docker experience to the cloud, then please let us know on our public roadmap.  I am looking forward to telling you more about what else we have been working on soon!


Creating the best Linux Development experience on Windows & WSL 2 (Docker Blog, May 20, 2020)

We are really excited to have had Docker Desktop featured in a breakout session titled “The Journey to One .NET” at MSFT Build by @Scott Hanselman, with WSL 2. Earlier in his keynote, we learned about the great new enhancements for GPU support in WSL 2, and we want to hear from our community about your interest in adding this functionality to Docker Desktop. If you are eager to see GPU support come to Docker Desktop, please let us know by voting up our roadmap item and feel free to raise any new requests here as well.

With this announcement, the imminent launch of the Windows 10 2004 release, and Docker Desktop 2.3.0.2 reaching WSL 2 GA, we thought this would be a good time to reflect on how we got to where we are today with WSL 2.

April 2019

Casting our minds back to 2019 (a very different time!), we first discussed WSL 2 with Microsoft in April. We were excited to get started and wanted to find a way to get a build as soon as possible.

May 2019

It turned out the easiest way to do this was to collect a laptop at KubeCon EU (never underestimate the bandwidth of a 747). We brought this back and started work on what would be our first ‘hacky’ version of WSL 2 for Docker Desktop.

June 2019

With some internal demos done, we decided to announce what we were planning <3

This announcement was a bit like watching a swan on a lake: our blog post was calm and collected, but beneath the water we were kicking madly to take us towards something we could share more widely.

July 2019

We finally got far enough along that we were ready to share something!

And not long after, we released our first preview of Docker Desktop using WSL 2.

August-September 2019

Once again, with a preview out and things seeming calm, we went back to work. We were talking with Microsoft weekly about how we could improve what we had, fixing bugs, and generally improving the experience. Simon and Ben did have enough time, though, to head over to the USA to talk to Microsoft about how we were getting on.

October 2019

We released a major rework of how Docker Desktop would integrate with WSL 2.

This added K8s support and provided feature parity with our old Hyper-V based back end. We also made the preview more visible in Docker Desktop, and our user numbers started to increase.

November 2019 – Feb 2020

This time flew by; we spent a lot of it chasing down bugs, looking at how we could improve the local experience, and working out what the best ways of working would be.

March 2020

We had built up a fair bit of confidence in what we had built and finally addressed one of the largest outstanding items in our backlog – we added Windows Home support.

This involved removing the functionality associated with running our old Moby VM in Hyper-V and all of the options associated with running Windows containers, as these are not supported on Windows Home. With this done, we were able to focus on heading straight to GA…

April 2020

We doubled down how we were getting ready for GA, learning lessons on improving our development practice. We wanted to share how we were preparing and testing WSL 2 ready for the 2.7m people out there running Docker Desktop.

May 2020

We finally reached GA with Docker Desktop 2.3.0.2!

Now that we are out in the wild, we have shared some ideas and best practices to make sure you are getting the best experience out of Docker Desktop when working with WSL 2.

(And of course for Windows Pro users this still comes with all the same features you know and love including the ability to switch back over to using Windows Containers.)

What’s next?

Next is for people to start using Docker Desktop with WSL 2! To try out Docker Desktop with WSL 2 today, make sure you are on Windows 10 2004 or higher and download the latest Docker Desktop to get started.

If you are enjoying Docker Desktop but have ideas about what we could do to make it better, then please give us feedback. You can let us know what features you want to see next via our roadmap, including voting up GPU support for WSL 2.

Helping You Better Identify Vulnerabilities in Partnership with Snyk (Docker Blog, May 19, 2020)

We are really excited that Docker and Snyk are now partnering together to engineer container security scanning deeply into Docker Desktop and Docker Hub. Image vulnerability scanning has been one of your most requested items on our public roadmap.

Modern software uses a lot of third-party open source libraries; indeed, this is one of the things that has really raised productivity in coding, as we can reuse work to support new features in our products and to save time in writing implementations of APIs, protocols and algorithms. But this comes with the downside of working out whether there are security vulnerabilities in the code that you are using. You have all told us that scanning is one of the most important roadmap issues for you.

Recall a famously huge data breach from the use of an unpatched version of the Apache Struts library, due to CVE-2017-5638. The CVE was issued in March 2017, and according to the official statement, while the patch should have been applied within 48 hours, it was not, and during May 2017 the websites were hacked, with the attackers having access until late July. This is everyone’s nightmare now. How can we help with this?

Do you know if there are security issues? The joint solution with Snyk and Docker will integrate scanning both into Docker Desktop and into Docker Hub, so that developers can quickly check for security issues while they are developing code and adding new dependencies (the inner loop), and the whole team can see vulnerabilities once images are pushed to Docker Hub (the outer loop).
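
As a rough sketch of what that inner-loop check might look like from the terminal (the command below is an assumption for illustration; the exact CLI surface was not part of this announcement):

# hypothetical: scan a locally built image for known vulnerabilities with the Snyk-powered scanner
docker scan myapp:latest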

The Snyk scanning will generally provide remediation information for updates that will fix vulnerabilities that are found. You do not have to try to fix all the vulnerabilities all the time, as that is a losing game. There is an ongoing flow of vulnerabilities, and you are always likely to see new ones being added.

The target for your team should be to triage the highest risk issues to see if they apply to you and fix issues with high priority. The Apache Struts vulnerability is an example here, as it provided remote code execution from any server using this framework. These types of vulnerabilities tend to have exploits written quite soon, and scripts become available to try to attack them. Other vulnerabilities might not be so critical, as your code may not be configured in a way that makes it vulnerable. If you are unsure, though, it is better to update sooner.

For less-critical vulnerabilities, the aim is to make sure that you get fixes updated in your build pipeline and vulnerabilities don’t hang around forever in dependencies that do not get updated. They may not be directly exploitable, but as they accumulate they may allow escalation from another vulnerability or combinations of vulnerable components that may create a larger vulnerability.

As we launch the joint Docker and Snyk scanning features we look forward to helping your team to ship software better, faster and more securely. For more information, check out this blog post by Snyk or read today’s press release

Announcing the DockerCon LIVE Container Ecosystem Track (Docker Blog, May 14, 2020)

With just two weeks until DockerCon LIVE goes, well, LIVE, we are humbled by the tremendous response from almost 50,000 Docker developers and community members, from beginner to expert, who have registered for the event.

DockerCon LIVE would not be complete without our ecosystem of partners who contribute to, and shape, the future of software development. They will be showcasing their products and solutions, and sharing the best practices they have built up working with the best developers and organizations across the globe.

We are pleased to announce the agenda for our Container Ecosystem Track with sessions built just for devs. In addition to actionable takeaways, their sessions will feature interactive, live Q&A, and so much more. Check out the incredible lineup:

Access Logging Made Easy With Envoy and Fluent Bit – Carmen Puccio, Principal Solutions Architect | AWS

Docker Desktop + WSL 2 Integration Deep Dive – Simon Ferquel, Senior Software Developer | Docker | Microsoft

Experience Report: Running a Distributed System Across Kubernetes Clusters – Chris Seto, Software Engineer | Cockroach Labs

Securing Your Containerized Applications with NGINX – Kevin Jones, Senior Product Manager | NGINX

You Want To Kubernetes? You MUST Know Docker! – Angel Rivera, Developer Advocate | CircleCI

Kubernetes at Datadog Scale – Ara Pulido, Technical Evangelist | Datadog

The Evolution of Tracing Containerized Environments – Devon Lawler, Head of Sales Engineering | Epsagon

Monitoring in a Microservices World – Fabian Stäber, Java developer | Instana

Blimp – docker-compose in the Cloud – Ethan Jackson, Founder, CEO | Kelda

Tinkertoys, Microservices, and Feature Management: How to Build for the Future – Heidi Waterhouse, Principal Developer Advocate | LaunchDarkly

Peeking inside your containers and infrastructure – Mike Elsmore, Developer Advocate | Logz.io

Your Container has Vulnerabilities. Now What? – Jim Armstrong, Product Marketing Director, Container Security | Snyk

Register for DockerCon LIVE today and we’ll see you in two weeks.

How Docker is Partnering with the Ecosystem to Help Dev Teams Build Apps (Docker Blog, May 12, 2020)

Back in March, Justin Graham, our VP of Product, wrote about how partnering with the ecosystem is a key part of Docker’s strategy to help developers and development teams get from source code to public cloud runtimes in the easiest, most efficient and cloud-agnostic way. This post will take a brief look at some of the ways that Docker’s approach to partnering has evolved to support this broader refocused company strategy. 

First, to deliver the best experience for developers Docker needs much more seamless integration with Cloud Service Providers (CSPs). Developers are increasingly looking to cloud runtimes for their applications as evidenced by the tremendous growth that the cloud container services have seen. We want to deliver the best developer experience moving forward from local desktop to cloud, and doing that includes tight integration with any and all clouds for cloud-native development. As a first step, we’ve already announced that we are working with AWS, Microsoft and others in the open source community to extend the Compose Specification to more flexibly support cloud-native platforms. You will see us continue to progress our activity in this direction. 

The second piece of Docker’s partnership strategy is offering best in class solutions from around the ecosystem to help developers and development teams build apps faster, easier and more securely. We know that there are tools out there that developers love and rely on in their workflow, and we want those to integrate with their Docker workflow. This makes everyone’s lives easier. Expect to see Docker Hub evolve to become a central point for the ecosystem of developer tools companies to partner with to deliver a more seamless and integrated experience for developers. Imagine your most beloved SaaS tools integrating right into Hub. 

We have been talking with some fantastic partners in the industry and are excited to make some announcements that bring this all to life in the coming weeks. Stay tuned! And if you haven’t already, register now for DockerCon on May 28, 2020, where you’ll learn more about how we’re working with the ecosystem to accelerate code-to-cloud development and hear from some of our great partners.

DockerCon LIVE 2020: Captains on Deck! (Docker Blog, May 7, 2020)

This is a guest post from Docker Captain Bret Fisher, a long-time DevOps sysadmin and speaker who teaches container skills with his popular courses (Docker Mastery, Kubernetes Mastery, Docker for Node.js, and Swarm Mastery) and his weekly YouTube Live shows. Bret also consults with companies adopting Docker. Join Bret and other Docker Captains at DockerCon LIVE 2020 on May 28th, where they’ll be live all day hanging out, answering questions and having fun.

When Docker announced in December that it was continuing its DockerCon tradition, albeit virtually, I was super excited and disappointed at the same time. It may sound cliché but truly, my favorite part of attending conferences is seeing old friends and fellow Captains, meeting new people, making new friends, and seeing my students in real life. 

Can a virtual event live up to its in-person version? My friend Phil Estes was honest about his experience on Twitter and I agree – it’s not the same. Online events shouldn’t be one-way information dissemination. As attendees, we should be able to *do* something, not just watch.

Well, challenge accepted. We’ve been working hard for months to pull together a great event for you – and this was before #quarantinelife and knowing ALL events would go virtual this year. Honestly, the more we get into the planning for DockerCon LIVE, the more excited I get. The reach of a virtual event is much broader, and for many, this will be the first DockerCon they will attend.

DockerCon LIVE’s format is a 1-day online event with 3+ simultaneous streams for you to choose from, and it’s not all session talks. Best of all, it’s free for everyone. As of the time I’m writing this, there are more than 36,000 people signed up! 

As part of the jam-packed line-up, I’ll be hosting Captains On Deck365体育手机投注, one of the three co-streaming Channels, where we’ll rotate Docker Captains — 2 per hour — and we’ll hang out, talk tech, and answer your questions real-time. At past DockerCons, Captains frequently hosted very popular Hallway Tracks, and we took what we loved about those events – meeting members of the community, talking shop, answering questions and having a lot of laughs. Captains on Deck was designed to virtualize that experience and make it last all day.

My friend and fellow Captain Nirmal Mehta agrees. He said, “One thing I’m super looking forward to, since it’s a virtual event, is connecting folks that would not have had a chance to speak or interact with the Captains if it was a physical conference.”

One thing I can assure you… we’ll have fun! When you’re in the Captains On Deck channel, you’ll see us unscripted, and I’m hoping we’ll get to do some hacking, create some things, troubleshoot stuff, and learn a whole lot with you! It’ll be a similar format to my DevOps and Docker Talk YouTube Live show on Thursdays, except you’ll be driving the show from chat! Every hour we’ll rotate our guests on the stream and take your questions and requests. 

No IRL conference provides the kind of access to speakers and Captains like we’re able to do with DockerCon LIVE. Come join me and the Captains for a fun day of learning together. Hop around between the Captains’ channel and the other channels streaming simultaneously:

theCUBE, where you’ll experience live interviews with industry expert speakers

Sessions, where you can attend recorded sessions and chat live with the speakers.

Register for DockerCon and add the Captains on Deck to your calendar. See you on the 28th!

How to Build and Test Your Docker Images in the Cloud with Docker Hub (Docker Blog, May 5, 2020)

Part 2 in the series on Using Docker Desktop and Docker Hub Together

Introduction

In part 1 of this series, we took a look at installing Docker Desktop, building images, configuring our builds to use build arguments, running our application in containers, and finally, we took a look at how docker-compose helps in this process.

In this article, we’ll walk through deploying our code to the cloud, how to use Docker Hub to build our images when we push to GitHub and how to use Docker Hub to automate running tests.

Docker Hub

Docker Hub is the easiest way to create, manage, and ship your team’s images to your cloud environments, whether on-premises or into a public cloud.

The first thing you will want to do is create a Docker ID, if you do not already have one, and log in to Hub.

Creating Repositories

Once you’re logged in, let’s create a couple of repos where we will push our images to.

Click on “Repositories” in the main navigation bar and then click the “Create Repository” button at the top of the screen.

You should now see the “Create Repository” screen.

You can create repositories for your account or for an organization. Choose your Docker ID from the dropdown. This will create the repository for your Docker ID.

Now let’s give our repository a name and description. Type projectz-ui in the name field and a short description such as: This is our super awesome UI for the Projectz application. 

We also have the ability to make the repository Public or Private. Let’s keep the repository Public for now.

We can also connect your repository to a source control system. You have the option to choose GitHub or Bitbucket but we’ll be doing this later in the article. So, for now, do not connect to a source control system. 

Go ahead and click the “Create” button to create a new repository.

Your repository will be created and you will be taken to the General tab of your new repository.

This is the repository screen where we can manage tags, builds, collaborators, webhooks, and visibility settings.

Click on the Tags tab. As expected, we do not have any tags at this time because we have not pushed an image to our repository yet.

We also need a repository for your services application. Follow the previous steps and create a new repository for the projectz-services application. Use the following settings to do so:

Repository name: projectz-services

Description: This is our super awesome services for the Projectz application

Visibility: Public

Build Settings: None

Excellent. We now have two Docker Hub repositories set up.

Project Structure

For simplicity, in part 1 of this series we only had one git repository. For this article, I refactored our project and broke it into two different git repositories to align more closely with today’s microservices world.

Pushing Images

Now let’s build our images and push them to the repos we created above.

Fork Repos

Open your favorite browser and navigate to the pmckeetx/projectz-ui repository.

Create a copy of the repo in your GitHub account by clicking the “Fork” button in the top right corner.

Repeat the processes for the pmckeetx/projectz-svc repository.

Clone the repositories

Open a terminal on your local development machine and navigate to wherever you work on your source code. Let’s create a directory where we will clone our repos and do all our work in.

$ cd ~/projects
$ mkdir projectz

Now let’s clone the two repositories you just forked above. Back in your browser, click the green “Clone or download” button and copy the URL. Use these URLs to clone the repos to your local machine.

$ git clone http://github.com/[github-id]/projectz-ui.git ui
$ git clone http://github.com/[github-id]/projectz-svc.git services

(Remember to substitute your GitHub ID for [github-id] in the above commands)

If you have SSH keys set up for your GitHub account, you can use the SSH URLs instead.
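
For example, assuming the standard GitHub SSH URL format:

$ git clone git@github.com:[github-id]/projectz-ui.git ui
$ git clone git@github.com:[github-id]/projectz-svc.git services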

List local images

Let’s take a look at the list of Docker images we have locally on our machine. Run the following command to see a list of images.

$ docker images

You can see that I have the nginx, projectz-svc, projectz-ui, and node images on my machine. If you do not see the above images, that’s okay, we are going to recreate them now.

Remove local images

Let’s first remove projectz-svc and projectz-ui images. We’ll use the remove image (rmi) command. You can skip this step if you do not have the projectz-svc and projectz-ui on your local machine.

$ docker rmi projectz-svc projectz-ui

If you get the following or similar error: Error response from daemon: conflict: unable to remove repository reference "projectz-svc" (must force) - container 6b1b99cc899c is using its referenced image 6b9eadff19ae

This means that the image you are trying to remove is being used by a container and can not be removed. You need to stop and rm (remove) the container before you can remove the image. To do so, run the following commands.

First, find the running container:

$ docker ps -a

Here we can see that the container named services is using the image projectz-svc which we are trying to remove. 

Let’s stop and remove this container. We can do this at the same time by using the --force option to the rm command. 

If we tried to remove the container by using docker rm services without first stopping it, we would get the following error: Error response from daemon: You cannot remove a running container 6b1b99cc899c. Stop the container before attempting removal or force remove

So we’ll use the --force option to tell Docker to send a SIGKILL to the container and then remove it.

$ docker rm --force services

Do the same for the UI container, if it is still running.

Now that we stopped and removed the containers, we can now remove the images.

$ docker rmi projectz-svc projectz-ui

Let’s list our images again.

$ docker images

Now you should see that the projectz-ui and projectz-services images are gone.

Building images

Let’s build our images for the UI and Services projects now. Run the following commands:

$ cd [working dir]/projectz/services
$ docker build --tag projectz-svc .
$ cd ../ui
$ docker build --tag projectz-ui .

If you would like a more in-depth discussion around building images and Dockerfiles, refer back to part 1 of this series.

Pushing images

Okay, now that we have our images built, let’s take a look at pushing them to Docker Hub.

Tagging images

If you look back at the beginning of the post where we set up our Docker Hub repositories, you’ll see that we created the repositories in our Docker ID namespace. Before we can push our images to Hub, we’ll need to tag them using this namespace.

Open your favorite browser and navigate to Docker Hub and let’s review real quick.

Login to Hub, if you’ve not already done so, and take a look at the dashboard. You should see a list of images. Choose your Docker ID from the dropdown to only show images associated with your Docker ID.

Click on the row for the projectz-ui repository. 

Towards the top right of the window, you should see a docker command highlighted in grey.

This is the Docker Push command followed by the image name. You’ll see that this command uses your Docker ID followed by a slash followed by the image name and tag, separated by a colon. You can read more about pushing to repositories and tagging images in our documentation.

Let’s tag our local images to match the Docker Hub Repository. Run the following commands anywhere in your terminal.

$ docker tag projectz-ui [dockerid]/projectz-ui:latest
$ docker tag projectz-svc [dockerid]/projectz-svc:latest

(Remember to substitute your Docker ID for [dockerid] in the above commands)

Now list your local images and see the newly tagged images.

$ docker images

Pushing

Okay, now that we have our images tagged correctly, let’s push our images to Hub.

The first thing we need to do is make sure we are logged into Docker Hub on the terminal. Although the repositories we created earlier are “public”, only the owner of the repository can push by default. If you would like to allow folks on your team to be able to push images and manage repositories, take a look at Organizations and Teams in Hub.

$ docker login
Login with your Docker ID to push and pull images from Docker Hub...
Username:

Enter your username (Docker ID) and password.

Now we can push our images.

$ docker push [dockerid]/projectz-ui:latest
$ docker push [dockerid]/projectz-svc:latest

Open your favorite browser and navigate to Docker Hub, select one of the repositories we created earlier and then click the “Tags” tab. You will now see the images and tag we just pushed.

Automatically Build and Test Images

That was pretty straightforward but we had to run a lot of manual commands. What if we wanted to build an image, run tests and publish to a repository so we could deploy our latest changes?

We might be tempted to write a shell script and have everybody on the team run it after they completed a feature. But this wouldn’t be very efficient. 

What we want is a continuous integration (CI) pipeline. Docker Hub provides these features using AutoBuilds and AutoTests.

Connecting Source Control

Docker Hub can be connected to GitHub and Bitbucket to listen to push notifications so it can trigger AutoBuilds.

I’ve already connected my Hub account to my GitHub account. To connect your own Hub account to your version control system follow these simple steps in our documentation.

Setup AutoBuilds

Let’s set up AutoBuilds for our two repositories. The steps are the same for both repositories so I’ll only walk you through one of them.

Open Hub in your browser, and navigate to the detail page for the projectz-ui repository.

Click on the “Builds” tab and then click the “Link to GitHub” button in the middle of the page.

Now, in the Build Configuration screen, select your organization and repository from the dropdowns. Once you select a repository, the screen will expand with more options.

Leave AUTOTEST set to Off and REPOSITORY LINKS set to Off as well.

The next thing we can configure is Build Rules. Docker Hub automatically configures the first BUILD RULE using the master branch of our repo. But we can configure more.

We have a couple of options we can set for build rules. 

The first is Source Type which can either be a Branch or a Tag. 

Then we can set the Source; this refers to either the Branch or the Tag name you would like to watch. You can enter a string literal or a RegExp that will be used for matching.

Next, we’ll set the Docker Tag that we want to use when the image is built and tagged.

We can also tell Hub what Dockerfile to use and where the Build Context is located.

The next option turns the Build Rule on or off.

We also have the option to use the Build Cache.

Save and Build

We’ll leave the default Build Rule that Hub added for us. Click the “Save and Build” button.

Our Build options will be saved and an AutoBuild will be kicked off. You can watch this build run on the “Builds” tab of your image page.

To view the build logs, click on the build that is in progress and you will be taken to the build details page where you can view the logs.

Once the build is complete, you can view the newly created image by clicking on the “Tags” tab. There you will see that our image was built and tagged with “latest”.

Follow the same steps to set up the projectz-svc repository. 

Trigger a build from Git Push

Now that we see that our image is being built, let’s make a change to our project and trigger a build with a git push.

Open the projectz-svc/src/routes.js file in your favorite editor and add the following code snippet anywhere before the module.exports = appRouter line at the bottom of the file.

...

appRouter.get( '/services/hello', function( req, res ) {
  res.json({ code: 'success', payload: 'World' })
})

...

module.exports = appRouter

Save the file and commit the changes locally.

$ git commit -am "add hello - world route"

Now, if we push the changes to GitHub, GitHub will trigger a webhook to Docker Hub which will in turn trigger a new build of our image. Let’s do that now.

$ git push

Navigate over to Hub in your browser and scroll down. You should see that a build was just triggered.

After the build finishes, navigate to the “Tags” tab and see that the image was updated.

Setup AutoTests

Excellent! We now have both our images building when we push to our source control repo. But this is just step one in our CI process. We should only push new images to the repository if all tests pass.

Docker Hub will automatically run tests if you have a docker-compose.test.yml file that defines a sut service. Let’s create this now and run our tests.

Open the projectz-svc project in your editor, create a new file named docker-compose.test.yml, and add the following YAML.

version: "3.6"

services:
  sut:
    build:
      context: .
      args:
        NODE_ENV: test
    ports:
      - "8080:80"
    command: npm run test

Commit the changes and push to GitHub.

$ git add docker-compose.test.yml
$ git commit -m "add docker-compose.test.yml for hub autotests"
$ git push origin master

Now navigate back to Hub and the projectz-svc repo. Once the build finishes, click on the build link and scroll to the bottom of the build logs. There you can see that the tests were run and the image was pushed to the repo.

If the build fails, you will see that the status turns to FAILURE and you will be able to see the error in the build logs.

Conclusion

In part 2 of this series, we showed you how Docker Hub is one of the easiest ways to automatically build your images and run tests without having to use a separate CI system. If you’d like to go further you can take a look at: 

Docker Desktop: WSL 2 Best practices (Docker Blog, May 4, 2020)

The Docker Desktop WSL 2 backend has now been available for a few months for Windows 10 insider users, and Microsoft just released WSL 2 on the Release Preview channel (which means GA is very close). We and our early users have accumulated some experience working with it and are excited to share a few best practices to implement in your Linux container projects!

Docker Desktop with the WSL 2 backend can be used as before from a Windows terminal. We focused on compatibility to keep you happy with your current development workflow.

But to get the most out of Windows 10 2004 we have some recommendations for you.

Fully embrace WSL 2

The first and most important best practice we want to share is to fully embrace WSL 2. Your project files should be stored within your WSL 2 distro of choice, you should run the docker CLI from this distro, and you should avoid accessing files stored on the Windows host as much as possible.

For backward compatibility reasons, we kept the possibility to interact with Docker from the Windows CLI, but it is not the preferred option anymore.

Running the docker CLI from WSL will bring you…

Awesome mounts performance

Both your own WSL 2 distro and docker-desktop run on the same utility VM. They share the same Kernel, VFS cache etc. They just run in separate namespaces so that they have the illusion of running totally independently. Docker Desktop leverages that to handle bind mounts from a WSL 2 distro without involving any remote file sharing system. This means that when you mount your project files in a container (with docker run -v ~/my-project:/sources <...>), docker will propagate inotify events and share the same cache as your own distro to avoid reading file content from disk repeatedly.

A little warning though: if you mount files that live in the Windows file system (such as with docker run -v /mnt/c/Users/Simon/windows-project:/sources <...>), you won’t get those performance benefits, as /mnt/c is actually a mountpoint exposing Windows files through a Plan9 file share.

Compatibility with Linux toolchains and build scripts

Most reasonably sized projects involving Linux containers come with a bunch of automation scripts. Those scripts are often developed for Linux first (because most of the time, CI/CD pipelines for those projects run on Linux), and developers running on Windows are often considered second-class citizens. They are often using less polished versions of those scripts, and have to deal with subtle behavioral differences.

By fully embracing WSL 2, Windows developers can use the exact same build and automation scripts as on Linux. This means that Windows-specific scripts don’t need to be maintained anymore. Also, that means that you won’t experience issues with different line endings between Windows and Mac/Linux users!

What about my IDE?

If you want an IDE for editing your files, you can do that even if they are hosted within your WSL 2 distro. There are 3 different ways:

  • Use Visual Studio Code Remote to WSL extension

If your IDE is Visual Studio Code, using Remote to WSL is the best way to continue working on your project. Visual Studio Code architecture is based on a client/server approach where pretty much everything except rendering and input processing is done in a server process, while the UI itself runs in a client process. Remote to WSL leverages that to run the whole server process within WSL while the UI runs as a classic win32 process.

That means that you get the same experience as before, but all your language services, terminals etc. run within WSL.

For more information, see Microsoft’s VS Code Remote to WSL documentation.

  • Point your IDE to your distro network share

WSL provides a network share for each of your running distros. For example, if I have a project in my Ubuntu distro at `~/wall-e`, I can access it from Windows Explorer (and from any Windows Process) via the special network share `\\wsl$\Ubuntu\home\simon\wall-e`.

  • Run an X11 server on Windows, and run a Linux native IDE

The setup is a bit more complicated, but you always have the possibility to run an X11 server on Windows (VcXsrv, X410,…) and configure your DISPLAY environment variable such that GUI apps on Linux get rendered properly.
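
As a small sketch of that last piece (this assumes the default WSL 2 networking setup, where the Windows host is reachable at the nameserver address listed in /etc/resolv.conf, and an X server configured to accept the connection):

# point Linux GUI apps at the X11 server running on the Windows host
export DISPLAY=$(awk '/nameserver/ {print $2}' /etc/resolv.conf):0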

Use BuildKit and multi-stage builds

Docker Desktop WSL 2 backend has access to all your CPU cores. To leverage this as much as possible (and also to get access to the latest build features), you should enable BuildKit by default.

The easiest way to do that is to add the following line to your ~/.profile file:

export DOCKER_BUILDKIT=1

This way, anytime you run docker build, it will run the build with the awesome BuildKit which is capable of running different build stages concurrently.

Use resource limits

Docker Desktop WSL 2 backend can use pretty much all CPU and memory resources on your machine. This is awesome for most cases, but there is a category of workloads where this can cause issues. Indeed, some containers (mainly databases, or caching services) tend to allocate as much memory as they can, and leave other processes (Linux or Win32) starving. Docker provides a way to impose limits on the memory a container can allocate (as well as quotas on CPU usage). You can find documentation about it here: http://docs.docker.com/config/containers/resource_constraints/.
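
For example, a minimal sketch that caps a single cache container (adjust the values to your workload):

# limit a Redis container to 1 GB of memory and two CPUs
docker run -d --memory=1g --cpus=2 redis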

Reclaim cached memory

WSL 2 automatically reclaims memory when it is freed, to make it available to Windows processes. However, if the kernel decides to keep content in cache (and with Docker, it tends to happen quite a lot), the amount of memory reclaimed might not be sufficient.

To reclaim more memory, after stopping your containers, you can run echo 1 > /proc/sys/vm/drop_caches as root to drop the kernel page cache and make WSL 2 reclaim memory used by its VM.

What next

We are excited for people to use Docker Desktop with WSL 2 and hope that the tips and tricks in this article will help you get the best performance for all of your workloads. 

If you have another tip or idea you want to share with us for using Docker, send us a tweet @docker, or if you have feedback on our implementation, raise a ticket against our GitHub repo.

Multi-arch build and images, the simple way (Docker Blog, April 30, 2020)

“Build once, deploy anywhere” is really nice on paper, but if you want to use ARM targets to reduce your bill, such as Raspberry Pis and AWS A1 instances, or even keep using your old i386 servers, deploying everywhere can become a tricky problem, as you need to build your software for these platforms. To fix this problem, Docker introduced the principle of multi-arch builds, and we’ll see how to use this and put it into production.

Quick setup

To be able to use the docker manifest command, you’ll have to enable the experimental features.

On macOS and Windows, it’s really simple. Open the Preferences > Command Line panel and just enable the experimental features.

On Linux, you’ll have to edit ~/.docker/config.json and restart the engine.
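
As a minimal sketch of what that file can contain (assuming you have no other CLI settings in it that you need to keep):

$ cat ~/.docker/config.json
{
  "experimental": "enabled"
}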

Under the hood

OK, now we understand why multi-arch images are interesting, but how do we produce them? How do they work?

Each Docker image is represented by a manifest. A manifest is a JSON file containing all the information about a Docker image. This includes references to each of its layers, their corresponding sizes, the hash of the image, its size and also the platform it’s supposed to work on. This manifest can then be referenced by a tag so that it’s easy to find.

For example, if you run the following command, you’ll get the manifest of a non-multi-arch image in the rustlang/rust repository with the nightly-slim tag:

$ docker manifest inspect --verbose rustlang/rust:nightly-slim
{
  "Ref": "docker.io/amd64/rust:1.42-slim-buster",
  "Descriptor": {
    "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
    "digest": "sha256:1bf29985958d1436197c3b507e697fbf1ae99489ea69e59972a30654cdce70cb",
    "size": 742,
    "platform": {
      "architecture": "amd64",
      "os": "linux"
    }
  },
  "SchemaV2Manifest": {
    "schemaVersion": 2,
    "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
    "config": {
      "mediaType": "application/vnd.docker.container.image.v1+json",
      "size": 4830,
      "digest": "sha256:dbeae51214f7ff96fb23481776002739cf29b47bce62ca8ebc5191d9ddcd85ae"
    },
    "layers": [
      {
      "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
      "size": 27091862,
      "digest": "sha256:c499e6d256d6d4a546f1c141e04b5b4951983ba7581e39deaf5cc595289ee70f"
      },
      {
        "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
        "size": 175987238,
        "digest": "sha256:e2f298701fbeb02568c3dcb9822f8488e24ef12f5430bc2e8562016ba8670f0d"
      }
    ]
  }

}

The question now is: how can we put multiple Docker images, each supporting a different architecture, behind the same tag?

What if this manifest file contained a list of manifests, so that the Docker Engine could pick the one that matches it at runtime? That’s exactly how the manifest is built for a multi-arch image. This type of manifest is called a manifest list.

Let’s take a look at a multi-arch image:

$ docker manifest inspect --verbose rust:1.42-slim-buster
[
  {
    "Ref": "docker.io/library/rust:1.42-slim-buster@sha256:1bf29985958d1436197c3b507e697fbf1ae99489ea69e59972a30654cdce70cb",
    "Descriptor": {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "digest": "sha256:1bf29985958d1436197c3b507e697fbf1ae99489ea69e59972a30654cdce70cb",
      "size": 742,
      "platform": {
        "architecture": "amd64",
        "os": "linux"
      }
    },
    "SchemaV2Manifest": { ... }
  },
  {
    "Ref": "docker.io/library/rust:1.42-slim-buster@sha256:116d243c6346c44f3d458e650e8cc4e0b66ae0bcd37897e77f06054a5691c570",
    "Descriptor": {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "digest": "sha256:116d243c6346c44f3d458e650e8cc4e0b66ae0bcd37897e77f06054a5691c570",
      "size": 742,
      "platform": {
        "architecture": "arm",
        "os": "linux",
        "variant": "v7"
      }
    },
    "SchemaV2Manifest": { ... }
...
]

We can see that it’s a simple list of the manifests of all the different images, each with a platform section that can be used by the Docker Engine to match itself to.

How they’re made

There are two ways to use Docker to build a multiarch image: using docker manifest or using docker buildx.

To demonstrate this, we will need a project to play with. We’ll use the following Dockerfile, which just results in a Debian based image that includes the curl binary.

ARG ARCH=
FROM ${ARCH}debian:buster-slim

RUN apt-get update \
&& apt-get install -y curl \
&& rm -rf /var/lib/apt/lists/*

ENTRYPOINT [ "curl" ]

Now we are ready to start building our multi-arch image.

The hard way with docker manifest

We’ll start by doing it the hard way with `docker manifest` because it’s the oldest tool made by Docker to build multiarch images.

To begin our journey, we’ll first need to build and push the images for each architecture to the Docker Hub. We will then combine all these images in a manifest list referenced by a tag.

# AMD64
$ docker build -t your-username/multiarch-example:manifest-amd64 --build-arg ARCH=amd64/ .
$ docker push your-username/multiarch-example:manifest-amd64

# ARM32V7
$ docker build -t your-username/multiarch-example:manifest-arm32v7 --build-arg ARCH=arm32v7/ .
$ docker push your-username/multiarch-example:manifest-arm32v7

# ARM64V8
$ docker build -t your-username/multiarch-example:manifest-arm64v8 --build-arg ARCH=arm64v8/ .
$ docker push your-username/multiarch-example:manifest-arm64v8

Now that we have built our images and pushed them, we are able to reference them all in a manifest list using the docker manifest command.

$ docker manifest create \
your-username/multiarch-example:manifest-latest \
--amend your-username/multiarch-example:manifest-amd64 \
--amend your-username/multiarch-example:manifest-arm32v7 \
--amend your-username/multiarch-example:manifest-arm64v8

Once the manifest list has been created, we can push it to Docker Hub.

$ docker manifest push your-username/multiarch-example:manifest-latest

If you now go to Docker Hub, you’ll be able to see the new tag referencing the images.
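
You can also check the result from the terminal with the same inspect command we used earlier:

$ docker manifest inspect your-username/multiarch-example:manifest-latest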

The simple way with docker buildx

You should be aware that buildx is still experimental.

If you are on Mac or Windows, you have nothing to worry about: buildx is shipped with Docker Desktop. If you are on Linux, you might need to install it by following the documentation here: http://github.com/docker/buildx

The magic of buildx is that the whole above process can be done with a single command.

$ docker buildx build \
  --push \
  --platform linux/arm/v7,linux/arm64/v8,linux/amd64 \
  --tag your-username/multiarch-example:buildx-latest .

And that’s it, one command, one tag and multiple images.

Let’s go to production

We’ll now target CI and use GitHub Actions to build a multiarch image and push it to the Hub.

To do so, we’ll write a configuration file that we’ll put in .github/workflows/image.yml of our git repository.

name: build our image

on:
  push:
    branches: master

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: checkout code
        uses: actions/checkout@v2
      - name: install buildx
        id: buildx
        uses: crazy-max/ghaction-docker-buildx@v1
        with:
          version: latest
      - name: build the image
        run: |
          docker buildx build \
            --tag your-username/multiarch-example:latest \
            --platform linux/amd64,linux/arm/v7,linux/arm64 .

Thanks to the GitHub Action crazy-max/ghaction-docker-buildx, we can install and configure buildx with only one step.

To be able to push, we now have to get an access token on Docker Hub in the security settings.

Once you have created it, you’ll have to set it in your repository settings in the Secrets section. We’ll create DOCKER_USERNAME and DOCKER_PASSWORD variables to log in with afterward.

Now, we can update the GitHub Action configuration file and add the login step before the build. And then, we can add the --push to the buildx command.

...
      - name: login to docker hub
        run: echo "${{ secrets.DOCKER_PASSWORD }}" | docker login -u "${{ secrets.DOCKER_USERNAME }}" --password-stdin
      - name: build the image
        run: |
          docker buildx build --push \
            --tag your-username/multiarch-example:latest \
            --platform linux/amd64,linux/arm/v7,linux/arm64 .

We now have our image being built and pushed each time something is pushed to master.

Conclusion

This post gave an example of how to build a multi-arch Docker image and push it to Docker Hub. It also showed how to automate this process for git repositories using GitHub Actions, but this can be done from any other CI system too.

An example of building a multiarch image on Circle CI, GitLab CI and Travis can be found here.
