Welcome to the new website! The site offers a lot more in the form of blogs and sustainability content. Please bear with us while we polish and update things in the coming weeks.

JVM and Cloud-Native Software Development with a green heart

Craftsmanship

At Yoink, we draw on our diverse skills and insights to build better software. We build and maintain reliable software for medium and large enterprises, using modern technologies and methodologies.

Sharing Knowledge

Knowledge multiplies when you share it, and we love doing that. We provide training courses and workshops, and we regularly speak at conferences.

Having fun

We believe that work can and should be fun! We're committed to creating an energizing, engaging atmosphere where we tackle projects with excitement and enthusiasm.

Knowledge Sharing

At Yoink, we make it our business to learn about the methods and technologies that help us build better software, better teams and a better future. We believe the ultimate step in a learning journey is passing that knowledge on to others. We share our knowledge through training courses, meetups and conference talks. There are many great talks available from Yoinkees:

The Case Against Frameworks

Jan-Hendrik Kuperus at JFall 2023 with The Case Against Frameworks talk

Another day, another silver bullet. The world of software development changes rapidly, and with every new framework there are those who claim it is the solution to all your problems.

Watch the talk...

Master Your Tools

Jan-Hendrik Kuperus at JFall 2019 with the Master Your Tools talk

A laptop is to a developer what a toolbox is to a carpenter. This talk showcases ways to speed up a developer's workflow and inspires you to learn about the tools at your fingertips.

Watch the talk...

Thought Leadership

The other way we share our thoughts and experiences is through articles on our blog. These range from technical how-tos to lessons learned from projects and opinionated essays. Below is an excerpt of the most recent post. You can find more articles on our blog.

Running a local LLM with Ollama

by Jan Ouwens

It’s January 2024 as I write this; I fully expect this post to be out of date by tomorrow, or even sooner. But I think this is exciting!

Running a Large Language Model, or LLM, or AI assistant locally always seemed like something that only the really dedicated hobbyists could do. It seemed to require lots of manual build steps and complicated tinkering to get something working. This is no longer true.

... continue reading