ALB Listener Rule with Terraform

Terraform is one of the most heavily used infrastructure tools in my daily work these days. It allows us to describe the wireframe of the cloud infrastructure we use in a simple configuration language called HCL. Thanks to that, we can safely modify the underlying infrastructure and quickly track the history of changes. Therefore, I'd like to collect some knowledge about the usage of Terraform based on actual use cases.

Today, I'm going to show you how to construct an Application Load Balancer (ALB) in AWS with Terraform. This is what I did to prepare the load balancer running in front of our service.

Create ALB

First, we need to create the ALB itself. The aws_lb resource defines the ALB as follows.

locals {
  this_alb_name = "myalb"
  redirect_to   = ""
}

resource "aws_lb" "myalb" {
  name               = "${local.this_alb_name}"
  internal           = false
  load_balancer_type = "application"
  # Hypothetical security group reference; the original value was elided
  security_groups    = ["${aws_security_group.alb.id}"]
  subnets            = ["${aws_subnet.public.*.id}"]

  access_logs {
    bucket  = "${aws_s3_bucket.lb_logs.bucket}"
    prefix  = "${local.this_alb_name}"
    enabled = true
  }
}

Create Listener

Next, we can attach a listener to the ALB we have created. We need the ARN of the ALB for the aws_lb_listener resource. Let's use a data source to retrieve the ARN this time.

data "aws_lb" "myalb" {
  name = "${local.this_alb_name}"
}

resource "aws_lb_listener" "mylistener" {
  load_balancer_arn = "${data.aws_lb.myalb.arn}"
  port              = "80"
  protocol          = "HTTP"

  default_action {
    type = "redirect"

    redirect {
      host        = "${local.redirect_to}"
      port        = "80"
      protocol    = "HTTP"
      status_code = "HTTP_301"
    }
  }
}

Note that this listener has a default action. This action returns a 301 response redirecting to the location specified by local.redirect_to. If no other rule matches, the default action is taken.

Add Listener Rule

Lastly, you can add custom rules as you like with aws_lb_listener_rule. We can get the ARN of the listener without using a data source because the listener is created in the same Terraform configuration.

resource "aws_lb_listener_rule" "redirect_to_cdp_bi" {
  listener_arn = "${aws_lb_listener.mylistener.arn}"
  priority     = 100

  action {
    type             = "forward"
    target_group_arn = "${local.this_tg.arn}"
  }

  condition {
    path_pattern {
      values = ["/forward_to/*"]
    }
  }
}
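The target group behind this_tg is not shown above; a minimal sketch of such a target group could look like the following. The resource name, VPC reference, and health-check path are assumptions, not part of the original configuration.

```hcl
# Hypothetical target group for the forward action above.
# The vpc_id reference and health check path are illustrative only.
resource "aws_lb_target_group" "this_tg" {
  name     = "mytg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = "${aws_vpc.main.id}"

  health_check {
    path = "/health"
  }
}
```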

The final diagram can look like this.

ALB Listener Rules

All requests matching the path /forward_to/* are routed to the target group this_tg. The others are redirected to the host specified by local.redirect_to by the default action.

The best thing about using Terraform is that we can do all of this in a reproducible manner. Once the Terraform configuration is written, we can get the same resources just by applying it.


Remove Msgstars from iPhone Calendar

These days, I have started receiving a new type of spam on my iPhone calendar. I have found several unrecognized events on my calendar, and the number of events is increasing day by day.

According to this article, this spam is called Msgstars. Unless we click the link attached to the event, it's harmless. It's even possible to disable the notification altogether. It seems not so noisy at first glance.

But it's too annoying to ignore because my calendar was filled up with these spam events, and my original events were concealed completely. How can we remove the existing spam events from the calendar?

Delete the Calendar Subscription

The reason why we keep seeing new events is a subscription to an external calendar. We may have subscribed to the calendar accidentally. Deleting the subscribed calendars will resolve the problem for sure. You can find the setting to remove the calendar by typing "password" in the search field of the Settings app.

Setting to remove calendar

Passwords & Accounts provides the list of all subscribed calendars. You can remove them one by one, and the events from Msgstars disappear immediately.

I hope this article is helpful for your case too!

How to inject Jersey Resource in Dropwizard with Dagger

Dependency Injection (DI) is one of the most notable practices for creating reliable, high-quality software. It enables us to keep extensibility without losing readability and testability. You may have encountered a situation where you wanted to replace objects in the software flexibly, as I have. Many frameworks and libraries allow us to make use of dependency injection in our software projects. In my case, I wanted to use Dagger in our web application built on Dropwizard, but I was ignorant of what Dagger was and how to use it in a Dropwizard project. Hence, this article writes down the process of getting started with Dagger in your Dropwizard web application.

What is Dagger

First of all, what is Dagger? Dagger is a Java-based dependency injection library originally invented by Square. For now, it’s mainly maintained by Google as an open-source project.


You might have heard about Guice before, which is also maintained by Google. It has a longer history than Dagger. Despite that, Dagger has a larger number of stars on its GitHub repository. Why is Dagger more popular than Guice? There are several reasons from my perspective.

  • Dagger is a compile-time DI library, while Guice's injection happens at runtime
  • Guice often causes errors that are challenging to solve, related to its use of reflection
  • Dagger provides simpler APIs
  • Dagger has notable use cases due to its adoption in Android development

Therefore, I decided to try Dagger in our web application this time.

How to integrate Dagger in Dropwizard project

What I’m going to do is integrate Dagger in a Dropwizard project to inject Jersey resources flexibly. Before going deeper into this goal, we need to be familiar with some Dagger terminologies.

  • Module: Holds the associations between interfaces and the actual injected objects.
  • Component: Constructs the whole graph, resolving the dependencies of injected objects.

Unlike Guice, what I've found is that we need to construct one more class called a Component. The component is the highest-level class managing all objects injected by Dagger; therefore, all objects should be obtained from the component.

In our case, we will create WebResourceModule for the module and WebappComponent for the component.

The client of the injected class can use the javax.inject.Inject annotation. Constructor injection or field injection is recommended in Dagger.

import javax.inject.Inject;

class UserResource {
  private final UserConfig config;

  @Inject
  public UserResource(UserConfig userConfig) {
    // Used for the user resource specific configuration
    this.config = userConfig;
  }
}

We are going to inject UserConfig as we like by using Dagger.

The Dagger library can be imported with the following code in build.gradle.

dependencies {
  // Dagger coordinates; pin the version you actually use (any recent 2.x)
  implementation 'com.google.dagger:dagger:2.28'
  annotationProcessor 'com.google.dagger:dagger-compiler:2.28'
}

Module and Component

First, we define the module to illustrate how to construct the target UserConfig class.

import dagger.Module;
import dagger.Provides;

@Module
public class WebResourceModule {
    private final UserConfig userConfig;

    public WebResourceModule(Configuration configuration) {
        this.userConfig = configuration.getUserConfig();
    }

    @Provides
    UserConfig provideUserConfig() {
        return this.userConfig;
    }
}

The @Provides annotation lets the compiler know how to construct the class at compile time. Therefore, all classes in the application use the UserConfig constructed by the provideUserConfig method. Next, we can create a component class for building the whole dependency graph.

import dagger.Component;

@Component(modules = {WebResourceModule.class})
public interface WebappComponent {
    UserResource getUserResource();
}

The argument of the @Component annotation specifies the modules that know how to construct the injected objects. All the WebappComponent interface needs to provide is a method to build the object we finally want: in this case, the web resource that will be registered with Dropwizard later. That's all we must do with Dagger.

But here comes one question: who creates the instance of WebappComponent? The answer is Dagger. Dagger generates a class prefixed with Dagger; in this case, DaggerWebappComponent will be generated, and we construct the UserResource from it. Additionally, it provides us a way to bind a module at runtime.
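To see why the component is just plumbing, here is a hand-wired sketch of roughly what a generated component does, using simplified stand-ins for the classes above (the UserConfig field and constructors are illustrative, not the real generated code):

```java
// Simplified stand-ins for the classes in this article.
class UserConfig {
    final String name;
    UserConfig(String name) { this.name = name; }
}

class UserResource {
    final UserConfig config;
    UserResource(UserConfig config) { this.config = config; }
}

class WebResourceModule {
    private final UserConfig userConfig;
    WebResourceModule(UserConfig userConfig) { this.userConfig = userConfig; }
    UserConfig provideUserConfig() { return userConfig; }
}

// A generated component essentially chains the providers together like this.
class HandWiredComponent {
    private final WebResourceModule module;
    HandWiredComponent(WebResourceModule module) { this.module = module; }

    UserResource getUserResource() {
        return new UserResource(module.provideUserConfig());
    }
}

public class Demo {
    public static void main(String[] args) {
        HandWiredComponent component =
            new HandWiredComponent(new WebResourceModule(new UserConfig("alice")));
        System.out.println(component.getUserResource().config.name); // prints "alice"
    }
}
```

Dagger writes this wiring for us at compile time, which is why a missing binding becomes a compile error rather than a runtime failure.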

Dropwizard Application

In the Dropwizard application, we obtain the component, get the UserResource from it, and register it as a Jersey resource.

public class Application extends io.dropwizard.Application<Configuration> {

  public static void main(String[] args) throws Exception {
    new Application().run(args);
  }

  @Override
  public void run(Configuration configuration, Environment environment) {
    // Bind the module to inject the user configuration.
    // All objects dependent on the UserConfig can change behavior without rewriting them.
    WebappComponent component = DaggerWebappComponent.builder()
        .webResourceModule(new WebResourceModule(configuration))
        .build();

    environment.jersey().register(component.getUserResource());
  }
}


DaggerWebappComponent has a builder interface to bind modules at runtime. By changing the module here, we can change the behavior. For testing purposes, we can build a component like this.

// TestWebResourceModule is assumed to be a subclass of WebResourceModule
WebappComponent testComponent = DaggerWebappComponent.builder()
    .webResourceModule(new TestWebResourceModule(configuration))
    .build();

UserResource testUserResource = testComponent.getUserResource();

It obviously helps us write more testable code.

Wrap Up

As we saw, using Dagger looks easy. Dagger enabled me to write more maintainable code without having to learn many things. Its simple APIs significantly reduce the trouble and burden of employing a DI framework in our software projects. Try Dagger in your Dropwizard project as well!

Why we should avoid default_scope in Rails

ActiveRecord in Rails provides a feature called scope to keep readability while encapsulating the details of business logic in the model class. It enables us to add a more intuitive interface to the model so that we can quickly call the scoped method without caring about the complicated underlying implementation. This also contributes to the well-known good practice in the MVC model, "Fat Model, Skinny Controller", which gives us clear guidance: we should not write non-response-related logic in the controller. If you are writing complicated logic that is not directly related to constructing the HTTP response, it should go into the model, not the controller. scope methods help materialize this goal.

What is default_scope?

As part of the scope feature, ActiveRecord has default_scope, which defines a scope applied to all queries on the model. Let's say we have a User model as follows.

class User < ActiveRecord::Base
end

User.all returns all users, as it states. But what if you want to get the users excluding all hidden ones? The following code returns the results you expect.

User.where(hidden: false)

But default_scope provides a more concise manner.

class User < ActiveRecord::Base
  default_scope { where(hidden: false) }
end

This default_scope is always applied to the model query. In other words, you do not need to specify the query explicitly anymore.

User.all # It will return the visible users, excluding hidden ones.

That is good: you do not need to specify the same where condition many times. default_scope automatically forms the basis of all queries.

In practice, however, default_scope is often not recommended in Rails.

Implicit Behavior Change

Based on my experience, the biggest problem of default_scope is that it is applied implicitly. If the writer of the default_scope is different from the user of the model, the behavior must look weird: model users unexpectedly see a query they did not write. Implicit behavior change is generally an anti-pattern. (In Scala, even the compiler shows a warning for implicit type conversions.)

In my case, I developed an API using a model class derived from the original web application. Since the data source is shared with it, it was useful to share the model class too. But this brought an unexpected pitfall caused by default_scope. At some point, another developer introduced the following default_scope.

class OriginalClass < ActiveRecord::Base
  default_scope { select(all_columns) }
end

The application I developed uses this class. What I want here are only c1, c2, and c3; returning all columns can cause a problem.

OriginalClass.where("c1 = xxx").select("c1, c2, c3")

As you can imagine, introducing the default_scope makes exactly that happen. Without any notice, all columns are returned, because I did not know about the change to the default behavior of OriginalClass.

Implicit behavior changes always require intensive care. All developers touching the codebase and related repositories need to be aware of the change in behavior. But we must not expect all members to do so; it's unrealistic.

Use scope, not default_scope

Here is the simple answer: use scope, not default_scope. What we wanted to do is completely achievable with scope; there was no special reason to use default_scope.

class OriginalClass < ActiveRecord::Base
  scope :get_all_columns, -> { select(all_columns) }
end

Using scope does not break any user's codebase implicitly. If users want to make use of this new scope, they call it explicitly. Of course, default_scope can reduce the amount of code you need to write in terms of the number of characters, but the damage and maintenance cost will surpass that benefit. Simply obeying the following guidance will keep your Rails code clean and more maintainable.

Use scope, not default_scope

Thanks for reading!


Google Keep for TODO List

When you get a chance to learn a new programming language or framework, you might encounter an exercise to develop a TODO app. The reason behind this kind of exercise is that a TODO app generally covers the functionality most web/mobile apps need, such as user identity management, data persistence, and presentation rendering. Still, a TODO app feels familiar: everyone can understand its specification at a glance without much prior knowledge. A TODO app is one of the applications we use every day.

But finding the best TODO app was not an easy task for me in the real world. I tried several TODO apps for my work and personal life.

They did not work for me so much due to the following reasons.

  • They are too complicated. In other words, they have too many functionalities. I love simple ones.
  • Some sort of task management is still necessary. Putting all tasks in a flat space is not useful for searching.

Although Todoist, Wunderlist, and Evernote provide many features, most of them are unnecessary for me. A plain notes app, on the other hand, is so simple that we cannot organize memos in any order.

Is there some TODO app satisfying these requirements?

Google Keep


I found one of my colleagues using Google Keep as a TODO app. Google Keep is an application providing a Post-it-like user interface. We can keep any resource (e.g., picture, text, link) there quickly. It's more like a simple memo application. But the notable thing about Google Keep is that it also provides a fine-grained search feature (as most Google products do). We can search by text, by labels we attach, and by colors specifying the type of the note. Of course, Google provides mobile apps for Google Keep.

I like Google Keep because it strikes the right balance between high functionality and simplicity. It's not designed purely as a TODO app; thanks to that fact, Google Keep is probably the handiest TODO app.

You will not get lost figuring out how to use Google Keep; it's easy to understand the full functionality. I'll keep pursuing ways to use Google Keep as an even better TODO app.


Image by Markus Winkler from Pixabay