When to use describe/context/it in RSpec

A well-structured test suite lets us check at a glance that the necessary cases are covered. Developers looking into the code later can quickly grasp which case they should add or modify. The famous unit testing framework gives us a way to organize test cases in that manner.

RSpec is a de facto standard testing framework used in many Ruby projects. Although I have used RSpec in several projects, I did not fully understand how to use the describe, context, and it keywords correctly. In my case, these keywords merely produced a meaningless nested structure, which is not ideal. Using them properly gives the unit tests written in RSpec an understandable shape. This article summarizes how to think about describe, context, and it when writing RSpec test cases.

describe: Target Object

Let’s assume we have the following FizzBuzz class to be tested.

class FizzBuzz
  def self.run(n)
    if n % 3 == 0 && n % 5 == 0
      'FizzBuzz'
    elsif n % 3 == 0
      'Fizz'
    elsif n % 5 == 0
      'Buzz'
    else
      n
    end
  end
end
We want to ensure with RSpec that FizzBuzz works as expected. The target here is the FizzBuzz class itself, since run is a class method.

describe FizzBuzz do
  # Test cases
end

context: Precondition

context is a place to hold the condition that should be satisfied before running the test. It can be a type of input or a precondition imposed on the target class. Here we use the type of input passed to the run method of FizzBuzz.

describe FizzBuzz do
  context '3-multiple' do
    # Test here
  end

  context '5-multiple' do
    # Test here
  end

  context '15-multiple' do
    # Test here
  end

  context 'other' do
    # Test here
  end
end

it: Expectation

We describe the expected output from the method or object in it (or its alias, example).

describe FizzBuzz do
  context '3-multiple' do
    it 'Get Fizz' do
      expect(FizzBuzz.run(3)).to eq('Fizz')
      expect(FizzBuzz.run(6)).to eq('Fizz')
    end
  end

  context '5-multiple' do
    it 'Get Buzz' do
      expect(FizzBuzz.run(5)).to eq('Buzz')
      expect(FizzBuzz.run(10)).to eq('Buzz')
    end
  end

  context '15-multiple' do
    it 'Get FizzBuzz' do
      expect(FizzBuzz.run(15)).to eq('FizzBuzz')
      expect(FizzBuzz.run(30)).to eq('FizzBuzz')
    end
  end

  context 'other' do
    it 'Get original number' do
      expect(FizzBuzz.run(4)).to eq(4)
      expect(FizzBuzz.run(8)).to eq(8)
    end
  end
end

This guideline is very helpful to me for writing well-structured tests in RSpec. With this structure, the background information behind each test case becomes explicit.
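Outside of RSpec, the same expectations can be checked with plain Ruby assertions. This is a minimal sketch that restates the FizzBuzz class above so the snippet runs on its own:

```ruby
class FizzBuzz
  def self.run(n)
    if n % 3 == 0 && n % 5 == 0
      'FizzBuzz'
    elsif n % 3 == 0
      'Fizz'
    elsif n % 5 == 0
      'Buzz'
    else
      n
    end
  end
end

# Mirror the four contexts from the spec above
raise 'Fizz expected'     unless FizzBuzz.run(3) == 'Fizz'
raise 'Buzz expected'     unless FizzBuzz.run(5) == 'Buzz'
raise 'FizzBuzz expected' unless FizzBuzz.run(15) == 'FizzBuzz'
raise 'number expected'   unless FizzBuzz.run(4) == 4
puts 'all checks passed'
```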

How to add new policy to IAM role by Terraform

Fine-grained security management is a critical component of deploying an enterprise application successfully. Terraform enables us to manage any resource on a cloud service using its declarative language, HCL. If you are a software engineer providing a service on AWS like me, Terraform gives you excellent capability and surely saves you time. I have found a tiny tip worth sharing about setting an IAM policy with Terraform. This article explains the use of aws_iam_role_policy and its limitations from a practical viewpoint.

Limitation of aws_iam_role_policy

We used aws_iam_role_policy to set a specific IAM policy on a role. It is the most straightforward way to attach a policy to the role you are managing. But there is a caveat to note: the resource can only create an inline policy, which is not designed to be shared by multiple roles afterward.

Looking at the following listing, you can see that the policy attached to my-role is defined inline within the role. Even if the policy is general enough to be used by other roles, we have no way to reuse it with aws_iam_role_policy.

resource "aws_iam_role" "my-role" {
  name = "my-role"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}

resource "aws_iam_role_policy" "my-policy" {
  name = "my-policy"
  role = "${aws_iam_role.my-role.id}"

  # This policy is exclusively available to my-role.
  # The actions and resources below are illustrative placeholders.
  policy = <<-EOF
  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Sid": "AccessObject",
        "Effect": "Allow",
        "Action": [
          "s3:GetObject"
        ],
        "Resource": [
          "arn:aws:s3:::my-bucket/*"
        ]
      }
    ]
  }
  EOF
}

Standalone policy with aws_iam_policy

Here come the aws_iam_policy and aws_iam_role_policy_attachment resources. aws_iam_policy creates a standalone IAM policy. It is almost the same as what aws_iam_role_policy does, except that it does not attach the policy to any IAM entity such as a user, role, or group. The policy is isolated and has no effect until it is attached to an existing IAM entity. aws_iam_role_policy_attachment does that, as the name implies: it attaches an existing policy to an existing IAM role. That means we can reuse the policy by attaching it to several roles.

resource "aws_iam_policy" "my-policy" {
  name = "my-policy"

  # The actions and resources below are illustrative placeholders.
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AccessObject",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::my-bucket/*"
      ]
    }
  ]
}
EOF
}

resource "aws_iam_role_policy_attachment" "my-policy-attach" {
  role       = "${aws_iam_role.my-role.name}"
  policy_arn = "${aws_iam_policy.my-policy.arn}"
}

If you have another role named my-role-2, you can attach my-policy to it as well with the following code.

resource "aws_iam_role_policy_attachment" "my-policy-attach-2" {
  role       = "${aws_iam_role.my-role-2.name}"
  policy_arn = "${aws_iam_policy.my-policy.arn}"
}

That's a handy way to reuse an existing policy component, and it is less error-prone because we avoid rewriting the same policy repeatedly.


We have another resource with a very similar name: aws_iam_policy_attachment. Be careful with this resource, because it attaches the policy exclusively. Across the entire AWS account, only the IAM entities (users/roles/groups) declared in that single aws_iam_policy_attachment resource are allowed to hold the policy; attachments created elsewhere are removed. That limitation is counterintuitive. Using aws_iam_role_policy_attachment instead will save you from wasting time digging into what's going on when you face an issue.
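For contrast, here is a sketch of the exclusive variant (the resource name is illustrative). Every entity that should hold the policy must be listed in this one resource; attachments made anywhere else get removed:

```hcl
# Exclusive attachment: this single resource is the sole source of truth
# for who holds my-policy across the whole account.
resource "aws_iam_policy_attachment" "my-policy-exclusive" {
  name       = "my-policy-exclusive"
  roles      = ["${aws_iam_role.my-role.name}", "${aws_iam_role.my-role-2.name}"]
  policy_arn = "${aws_iam_policy.my-policy.arn}"
}
```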


POST API by Lambda with serverless framework

Serverless has been a buzzword in recent years. It brings a new concept: providing a web service without depending on a (virtually) fixed set of server machines, which enables us to build a more agile and flexible platform that responds to change faster.

Serverless Framework is one of the most notable frameworks implementing the serverless concept. It supports many major cloud service providers, such as AWS and Azure. We can quickly launch a new web-based service with minimal code.

I have created a web API providing a POST endpoint with Serverless Framework, backed by AWS Lambda and API Gateway. It took a little investigation to do so, so those facing the requirement to provide a POST API with Lambda may find this useful. Here is the guide I wish I had before starting to develop the API.


serverless.yml is the central place controlling all configuration of the infrastructure managed by the serverless application. It specifies the name of the provider, environment variables, and so on.

service: myservice

plugins:
  # Necessary to purge previous versions
  - serverless-prune-plugin
  # Install all dependencies specified by requirements.txt
  - serverless-python-requirements

provider:
  name: aws
  runtime: python3.7
  stage: ${opt:stage, 'development'}
  region: us-east-1

The custom field provides variables that are likely to change depending on the environment the application runs in.

custom:
  stages:
    - development
    - production
  a_variable:
    development: variable_for_development
    production: variable_for_production
  pythonRequirements:
    dockerizePip: true
  prune:
    # Specify the number of retained previous versions
    automatic: true
    number: 10

Function for POST

The function definition for the POST endpoint is easy to write.

functions:
  post_endpoint:
    handler: handler.post_endpoint
    events:
      - http:
          path: myapp/post_endpoint
          method: post
    environment:
      # Set the stage specific variable
      A_VARIABLE: ${self:custom.a_variable.${self:provider.stage}}

Since the POST endpoint parses the HTTP request body, there is no need to specify the required parameters in the config.

Handler Method

We can find the POST method in the handler code as follows.

import json

def post_endpoint(event, context):
    print("A POST endpoint")
    # Obtain the body in JSON format
    body = json.loads(event["body"])

We can extract any parameter from the body like body['key']. Note that validating the parameters is the handler's responsibility: a parameter the app requires may be missing from the body, so make sure to check for its existence beforehand.

def get_or_none(key, body):
    if key in body:
        return body[key]
    return None

get_or_none('key', body)
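Putting the pieces together, here is a hedged sketch of a handler that rejects requests missing a required parameter. The parameter name "name" and the response shape are illustrative assumptions, not part of the original app:

```python
import json

def post_endpoint(event, context):
    # API Gateway passes the HTTP body as a string; parse it as JSON.
    body = json.loads(event["body"])

    # Hypothetical required parameter "name"; reject the request if missing.
    if "name" not in body:
        return {
            "statusCode": 400,
            "body": json.dumps({"error": "missing required parameter: name"}),
        }

    return {
        "statusCode": 200,
        "body": json.dumps({"message": "Hello, " + body["name"]}),
    }
```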


Conversion from std to llvm with MLIR

Continuing from the last article, I'm going to cover another MLIR topic.

mlir-opt is a utility tool for manipulating MLIR code by applying various kinds of passes and optimizations. It enables us to convert one MLIR dialect to another easily. There is a tremendous amount of functionality and options in mlir-opt, so I'm afraid I cannot cover all of it on this small page. (mlir-opt --help emits 372 lines of options!)

The main takeaway of this article is the basic usage of mlir-opt for dialect conversion, demonstrated by converting the std dialect to the llvm dialect. At the end, we will see the result returned by the code lowered by mlir-opt. I hope this article works as a little mlir-opt tutorial that gets you used to the tools provided by MLIR.


First, let’s write a tiny MLIR code returning an i32 value from the main function. It should work as a hello world program in our case.

func @main() -> (i32) {
  %0 = constant 42 : i32
  return %0 : i32
}

We define a function named @main that receives no arguments and returns a single i32 value. constant is an operation provided by the std dialect that generates an SSA value from the specified attribute. Finally, the function returns the SSA value (%0) with the std.return operation, which works as the terminator of the function.

You may intuitively expect mlir-opt to convert it into a function returning 42. That's right! We'll confirm that mlir-opt and the tools provided by MLIR work as expected. mlir-opt legalizes std to llvm as follows.

$ mlir-opt --convert-std-to-llvm mytest.mlir
module attributes {llvm.data_layout = ""}  {
  llvm.func @main() -> i32 {
    %0 = llvm.mlir.constant(42 : i32) : i32
    llvm.return %0 : i32
  }
}

The converted code is printed to stdout. But note that we are still in the world of MLIR, which is not directly executable. We also need to generate LLVM IR from the LLVM dialect code.


Here comes mlir-cpu-runner. This tool provides a JIT environment for MLIR code and is capable of executing any LLVM dialect code as it is.

$ mlir-opt --convert-std-to-llvm mytest.mlir  | mlir-cpu-runner --entry-point-result=i32

But it also has an option to print the LLVM IR generated from the given LLVM dialect code. --print-module dumps the LLVM IR of the corresponding LLVM module constructed in the JIT environment of mlir-cpu-runner. That allows us to leave the world of MLIR and obtain the code in a portable format.

$ mlir-opt --convert-std-to-llvm mytest.mlir  | mlir-cpu-runner \
    --print-module --entry-point-result=i32 > /dev/null
; ModuleID = 'LLVMDialectModule'
source_filename = "LLVMDialectModule"
target datalayout = "e-m:o-p270:32:32-p271:32:32-p272:64:64-i64:64-f80:128-n8:16:32:64-S128"
target triple = "x86_64-apple-darwin19.6.0"

declare i8* @malloc(i64)

declare void @free(i8*)

define i32 @main() !dbg !3 {
  ret i32 42, !dbg !7
}

define void @_mlir_main(i8** %0) {
  %2 = call i32 @main()
  %3 = getelementptr i8*, i8** %0, i64 0
  %4 = load i8*, i8** %3, align 8
  %5 = bitcast i8* %4 to i32*
  store i32 %2, i32* %5, align 4
  ret void
}

!llvm.dbg.cu = !{!0}
!llvm.module.flags = !{!2}

!0 = distinct !DICompileUnit(language: DW_LANG_C, file: !1, producer: "mlir", isOptimized: true, runtimeVersion: 0, emissionKind: FullDebug)
!1 = !DIFile(filename: "LLVMDialectModule", directory: "/")
!2 = !{i32 2, !"Debug Info Version", i32 3}
!3 = distinct !DISubprogram(name: "main", linkageName: "main", scope: null, file: !4, line: 2, type: !5, scopeLine: 2, spFlags: DISPFlagDefinition | DISPFlagOptimized, unit: !0, retainedNodes: !6)
!4 = !DIFile(filename: "<stdin>", directory: "/path/to/llvm-project/build")
!5 = !DISubroutineType(types: !6)
!6 = !{}
!7 = !DILocation(line: 4, column: 5, scope: !8)
!8 = !DILexicalBlockFile(scope: !3, file: !4, discriminator: 0)

Since mlir-cpu-runner prints the module to stderr, I discarded stdout, which shows the output from the program itself (42 in this case).

Execute the Program on the Host Machine

Okay, now the code is executable on any machine covered by an LLVM target. I'm going to use lli, a tool that executes programs given as LLVM assembly.

$ mlir-opt --convert-std-to-llvm mytest.mlir  | \
    mlir-cpu-runner --print-module --entry-point-result=i32 > /dev/null 2> mytest.ll

lli executes the program in the format of LLVM assembly.

$ lli mytest.ll
$ echo $?
42

It works. It should be fun to rewrite the code in the std dialect and play around, observing the results.
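For example, a slightly larger std-dialect program goes through the same pipeline. This sketch (assuming the same MLIR version, where addi is the std-dialect integer addition) computes 40 + 2:

```mlir
func @main() -> (i32) {
  %0 = constant 40 : i32
  %1 = constant 2 : i32
  %2 = addi %0, %1 : i32
  return %2 : i32
}
```

Piping it through mlir-opt --convert-std-to-llvm and mlir-cpu-runner as above should again yield 42 as the entry-point result.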


Hello, World with MLIR (2)

Continuing from the last article, in which we created a minimal Dialect to print tensor elements with MLIR, I am going to illustrate the structure of the Dialect's codebase.

As noted previously, I put the whole repository on Lewuathe/mlir-hello. Please take a look into that if you need to know more.

Code Structure

The official site contains a general guide to creating a Dialect. Here is an illustration of the structure of the repository.

├── CMakeLists.txt
├── README.md
├── hello-opt
│   ├── CMakeLists.txt
│   └── hello-opt.cpp
├── hello-translate
│   ├── CMakeLists.txt
│   └── hello-translate.cpp
├── include
│   ├── CMakeLists.txt
│   └── Hello
│       ├── CMakeLists.txt
│       ├── HelloDialect.h
│       ├── HelloDialect.td
│       ├── HelloOps.h
│       ├── HelloOps.td
│       └── HelloPasses.h
├── lib
│   ├── CMakeLists.txt
│   └── Hello
│       ├── CMakeLists.txt
│       ├── HelloDialect.cpp
│       ├── HelloOps.cpp
│       ├── LowerToAffine.cpp
│       └── LowerToLLVM.cpp
├── test
│   ├── CMakeLists.txt
│   ├── Hello
│   │   ├── dummy.mlir
│   │   ├── print.mlir
│   │   ├── sample-opt.mlir
│   │   └── sample-translate.mlir
│   ├── lit.cfg.py
│   └── lit.site.cfg.py.in

ODS Declarations

The include directory holds the definitions of the Dialect and its Operations in the Operation Definition Specification (ODS) format. ODS is a framework for declaratively defining the specification of Dialects and Operations, powered by the TableGen mechanism maintained in LLVM core. MLIR generates C++ code from the ODS declarations. We need to write the following in the CMake files.

# Add the HelloOps for the dialect operations
add_mlir_dialect(HelloOps hello)

# Necessary to generate documentation
add_mlir_doc(HelloDialect -gen-dialect-doc HelloDialect Hello/)
add_mlir_doc(HelloOps -gen-op-doc HelloOps Hello/)

With this directive, CMake automatically generates the header files named HelloOpsDialect.h.inc and HelloOps.h.inc containing C++ code corresponding to the Dialect and operations you defined. We must include these files explicitly in the hand-written header files.


// In HelloDialect.h
#include "Hello/HelloOpsDialect.h.inc"

// In HelloOps.h
#define GET_OP_CLASSES
#include "Hello/HelloOps.h.inc"

It's worth noting that HelloOps.h uses the preprocessor directive #define GET_OP_CLASSES. Interestingly, HelloOps.h.inc contains several distinct sections in a single file so that a consumer can fetch only the necessary parts by defining the corresponding macro. GET_OP_CLASSES expands to the declarations of the operation classes.

Implementation Classes

The code implementing the operations, transformations, and so on should be put in the lib/Hello directory. HelloDialect.cpp needs to have at least an initializer.

#include "mlir/IR/Builders.h"
#include "mlir/IR/OpImplementation.h"

#include "Hello/HelloDialect.h"
#include "Hello/HelloOps.h"

using namespace mlir;
using namespace hello;

void HelloDialect::initialize() {
  addOperations<
#define GET_OP_LIST
#include "Hello/HelloOps.cpp.inc"
      >();
}

Note that we use GET_OP_LIST to pull in the list of all operations supported by the Hello Dialect. Similarly, we can write the HelloOps.cpp file as follows.

#include "Hello/HelloOps.h"
#include "Hello/HelloDialect.h"
#include "mlir/IR/OpImplementation.h"

#define GET_OP_CLASSES
#include "Hello/HelloOps.cpp.inc"

This structure makes the separation between the Dialect-related and the Operation-related implementation clear.

Passes for Lowering

In addition to these files, the Hello dialect has two files for lowering Hello code to LLVM: LowerToAffine.cpp and LowerToLLVM.cpp. These passes define how to convert one dialect to another. In our case, the Hello Dialect must be compiled into an executable format before we can run it; once the code is transformed into the LLVM IR format, we can execute it. Therefore, the goal of these passes is to lower the Hello Dialect to LLVM, passing through the Affine and Standard dialects. In the hello-opt CLI, we register these passes as follows.

// Register passes to be applied in this compile process
mlir::PassManager passManager(&context);
mlir::OpPassManager &optPm = passManager.nest<mlir::FuncOp>();

// Lower Hello to Affine within each function, then the module to LLVM
// (the pass-creation helpers live in LowerToAffine.cpp and LowerToLLVM.cpp)
optPm.addPass(hello::createLowerToAffinePass());
passManager.addPass(hello::createLowerToLLVMPass());

We will look into the details of the transformations and the pass infrastructure itself another time.

The following CMake directive is required to compile the project properly. You can add additional libraries here if necessary.
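The exact directive lives in lib/Hello/CMakeLists.txt in the repository. As a sketch, the conventional shape using add_mlir_dialect_library looks like the following; the library name and dependency list are assumptions based on the MLIR standalone-dialect convention, so check the real file in Lewuathe/mlir-hello:

```cmake
# Hypothetical sketch of lib/Hello/CMakeLists.txt.
# MLIRHelloOpsIncGen is the TableGen target created by add_mlir_dialect(HelloOps hello).
add_mlir_dialect_library(MLIRHello
  HelloDialect.cpp
  HelloOps.cpp
  LowerToAffine.cpp
  LowerToLLVM.cpp

  DEPENDS
  MLIRHelloOpsIncGen

  LINK_LIBS PUBLIC
  MLIRIR
  MLIRPass
  MLIRTransforms
  )
```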

Run hello-opt

hello-opt is a tool to quickly convert Hello dialect code into LLVM IR. It loads the necessary dialects from the registry. The MLIR module is loaded and held in the mlir::OwningModuleRef class.

int main(int argc, char **argv) {
  cl::ParseCommandLineOptions(argc, argv, "Hello compiler\n");

  mlir::MLIRContext context;

  mlir::OwningModuleRef module;
  if (int error = loadAndProcessMLIR(context, module)) {
    return error;
  }

  return 0;
}
Let’s say we have the following Hello dialect code.

func @main() {
    %0 = "hello.constant"() {value = dense<1.0> : tensor<2x3xf64>} : () -> tensor<2x3xf64>
    "hello.print"(%0) : (tensor<2x3xf64>) -> ()
    return
}

It defines a constant tensor with shape <2x3> whose elements are all 1.0, and prints each element according to the tensor shape. Let's execute it.

Build the project as follows.

mkdir build && cd build

# Path to the LLVM artifacts we build previously
LLVM_DIR=/path/to/llvm-project/build/lib/cmake/llvm \
  MLIR_DIR=/path/to/llvm-project/build/lib/cmake/mlir \
  cmake -G Ninja ..

cmake --build . --target hello-opt

hello-opt will dump the LLVM IR into the print.ll file.

# Lower MLIR to LLVM IR
./build/bin/hello-opt ./test/Hello/print.mlir > /path/to/print.ll

You can use lli to execute the generated LLVM IR.

lli /path/to/print.ll

1.000000 1.000000 1.000000
1.000000 1.000000 1.000000

It finally works!

Besides that, MLIR has many more exciting topics to discuss, such as Interfaces and DRR for rewriting. Please visit the great official website for more about MLIR. I'll extend the Hello dialect further if I get a chance.