Eduardo Lavaque's Blog https://greduan.com/blog/

Straightforward Programming (by Hasen Judi) https://greduan.com/blog/2024/07/29/straightforward-programming-hasen-judi Mon, 29 Jul 2024 00:00:00 +0000 2024-07-29-straightforward-programming-hasen-judi While trying to put together all my thoughts regarding keeping code simple in my last three posts, I came across the Straightforward Programming Manifesto by Hasen Judi.

Haven't asked for permission to reproduce it, so I won't.

But go and read it, I think it's a very good model.

What it does that I didn't do is focus on data, which is essential for the kind of simplicity I'm advocating for.

Normally Data-Oriented Design is more around processor optimizations (think SIMD and the like), although of course not exclusively about that.

Hasen has very nicely put together a model for how to think about what makes Data-Oriented Design work in other contexts.

]]>
The Retrieval Augmented Generation (RAG) pattern for LLMs https://greduan.com/blog/2024/07/29/rag-pattern Mon, 29 Jul 2024 00:00:00 +0000 2024-07-29-rag-pattern Foreword

This was written many months ago.

The first draft was in March, the second in May. It's almost August and I don't think I'm interested in revisiting this to make it even clearer.

I publish it now so it sees the light of day. It is still in a somewhat dirty state but I hope you find it useful nonetheless.

That being said, the basic information is still accurate, and is still applicable and helpful. It is my best knowledge on the subject.

Preface: Pattern?

Yes, pattern.

Software development patterns are naturally recurring problems and problem definitions. The fact that people refer to it simply as RAG, and not "the RAG pattern", is another sign that it is a true pattern: it is common enough that people just have a word for it, without anyone having claimed it as a pattern.

"Patterns" have a set of solutions or approaches that are documented by those who have worked on the problem.

The aim of this article is to make you thoroughly familiar with the RAG pattern and its various solutions.

If you're interested in the topic of patterns, the book Unresolved Forces by Richard Fabian is a hefty but valuable read.

Preface: What this article won't cover

For this article, understanding these in depth is not required.

Introduction

First, if you look at LangChain, a very popular resource for RAG and other AI pipelines, you might have run across this diagram in its RAG documentation:

LangChain RAG diagram

And then it goes on to describe how LangChain solves these steps.

But it doesn't really define these steps.

So let's start with the basics.

What is an LLM?

What is RAG?

Large Language Model (LLM)

LLM stands for Large Language Model: a technology in which an AI becomes very "intelligent" and useful simply by being an auto-complete machine learning mechanism, trained on trillions of data points, to the point that it develops enough shortcuts to seem intelligent.

When you ask for "a story about a little girl that dresses in pink", the LLM goes and basically "auto-completes" a story by figuring out what kind of text is related or is considered related to the prompt given, and it does so in a coherent way because the texts it used for education were also coherent.

A cool field, and it's having its bubble right now; the bubble will pop someday and we'll be there to see it.

There are many models, commercial and open source.

Personally I'm most familiar with OpenAI's GPT models and Anthropic's Claude.

You might have heard of Facebook's LLaMA, and there are also many other open source ones.

Retrieval Augmented Generation (RAG)

Retrieval Augmented Generation.

The LLM generates text based on the prompt given to it. As in the example above "a story about a little girl that dresses in pink".

But what if you wanted to give it more context?

"A story about a little girl that dresses in pink, named {{ name }}."

What if in your DB you have different girls' names, and you want to use a different one depending on the current active user?

That's what RAG is.

For this simple example, for each user we'd fetch the user record, and get the name, and inject it in there. So some variations of the prompt would be:

  • "A story about a little girl that dresses in pink, named Jenny."
  • "A story about a little girl that dresses in pink, named Stacy."
  • "A story about a little girl that dresses in pink, named Maria."

In other words, before you send your prompt to the LLM so it goes and does its deep neural network magic, you retrieve the most relevant data for the prompt, so that the prompt is as accurate and pertinent as it can be.
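
As a minimal sketch of that flow (plain Python; the user dict is a stand-in for whatever record your DB returns):

def build_prompt(user: dict) -> str:
    # Inject the retrieved record's name into the prompt template
    return f"A story about a little girl that dresses in pink, named {user['name']}."

# Pretend this record came from the DB for the current active user
user = {"id": 1, "name": "Jenny"}

prompt = build_prompt(user)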

Applications vary, but common use cases include searching content and fact retrieval via Q&A.

Why RAG?

Couldn't I just give ALL the data to the LLM in the prompt? It's intelligent enough to figure it out, right?

Well, no. It's not intelligent.

And there are context window limits. If your limit is 128k tokens (each token is roughly 4 characters, depending slightly on the model), you can't fit your entire database in there, plus the system prompt, plus the generated output. At roughly 4 characters per token, 128k tokens is only about half a megabyte of text. At least in most cases, your data won't fit.

Some LLMs like Claude are trying to break these barriers. But it's not cheap. And putting all of that content in the prompt will bias Claude's output.

Pro tip: rely as little as possible on the LLM being accurate, because it can be very inaccurate, randomly.

Overall, for these reasons, for the use case of improving the quality of the generated text by augmenting the LLM's prompt with precise data, RAG is still the preferred method.

With that understanding in mind, let's move forward and talk about the steps described by that LangChain diagram.

Two phases: Preparation and Retrieval

Generally I think of the whole flow as two phases:

Preparation and Retrieval.

In Preparation we usually have:

  • Source
  • Load
  • Transform
  • Embed (of the data)
  • Store

And in Retrieval we usually have:

  • Embed (of the user's prompt)
  • Retrieve

And finally, what we're usually building up to, although an optional step:

  • Generation

Preparation: Source

Your Source is anything. Video, audio, text, images, PDFs, Word docs, what have you.

These contain text of some sort, which you've got to extract in one way or another, to make useful for later in the process.

It could also be images or other media associated with the text that you're going to match against.

Input: (none)
Output: Anything

Preparation: Load

In this diagram it is called Loading. I think of it more as extracting the text.

The purpose of this step is to get some raw text from your Source.

Depending on the source, you may need to apply special algorithms or methods to extract the content uniformly and as cleanly as possible.

If you wanted to search over a video's contents, Loading could mean transcribing the video, for example, so that you then have its text to work with in an LLM.

A popular tool here is Unstructured. Personally I've used it for PDFs and it does an OK job of structuring the data, although PDFs as a format are very dirty.
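
For illustration, a minimal sketch with Unstructured (the file name is made up; partition() picks a parser based on the file type):

from unstructured.partition.auto import partition

# Parse the file into elements (titles, paragraphs, list items, ...)
elements = partition(filename="manual.pdf")

# Join the elements' text into the raw text for the next step
raw_text = "\n\n".join(el.text for el in elements)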

Input: Anything
Output: Text

Preparation: Transform

Now you have your raw text data.

Now you need to Transform it into useful chunks.

The usefulness of the chunk is primarily defined by whether it represents an independent "fact".

Essentially you need to split your data up into the smallest individual chunks of information that are useful for your application, or you could combine multiples of them (have the smaller chunks, and make up larger chunks from the smaller ones too). Depending on what's valuable for your application.

A chunk might be something like any of the sentences here:

Now you have your raw text data.

Or it could be longer, a paragraph:

Essentially you need to split your data up into the smallest individual chunks of information that are useful for your application, or you could combine multiples of them (have the smaller chunks, and make up larger chunks from the smaller ones too). Depending on what's valuable for your application.

But it always represents one unit of data that is useful to your users, or that informs the prompt whose generated output your users will then find valuable.

You need to experiment and find out what kind of chunks produce the kind of output your application needs for the user prompts that you allow or expect.

For example you could ask an LLM to split it up semantically for you, but that's expensive. Or you could split it up by sentence and associate some metadata (like the page number) and then use the whole page as reference, that's cheap, but it also might not be what you're looking for.
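
As a minimal sketch of the cheap approach (a naive regex split by sentence, tagging each chunk with its page number; real documents usually need something smarter):

import re

def split_page(page_text: str, page_num: int) -> list[dict]:
    # Split on sentence-ending punctuation followed by whitespace
    sentences = re.split(r"(?<=[.!?])\s+", page_text.strip())
    # Keep the page number so the whole page can be used as a reference later
    return [{"text": s, "page_num": page_num} for s in sentences if s]

chunks = split_page("Now you have your raw text data. Now you need to Transform it.", page_num=1)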

The single-most useful resources I have found are these, to give you some examples of how to work through a set of text to split it up:

Input: Text
Output: Chunks of independent text

Preparation: Embed

This is a straightforward step to do, but it will require some explanation to understand it.

In short, you are taking one of the chunks that you've split up in the previous Transform step, and computing a vector representation of it with a particular embedding model.

To really understand what this is about, I suggest you read OpenAI's article on the subject of text and code embeddings. It has simple aesthetic visuals to really help you understand what it means to "embed" something.

THIS STEP IS OPTIONAL. It's worth noting. If you don't need semantic search capabilities, this step does not offer you value.

Options for this are OpenAI's embedding models, or VoyageAI (Anthropic's recommendation for use with Claude), both hosted, or open-source models that run locally.

# Example using VoyageAI

import voyageai

vo = voyageai.Client()  # reads the VOYAGE_API_KEY environment variable

def embed(text: str) -> list[float]:
    # input_type="query" suits user prompts; for stored chunks VoyageAI
    # recommends input_type="document"
    return vo.embed([text], model="voyage-2", input_type="query").embeddings[0]

chunk = "The Sony 1000 is the newest iPhone from Sony."

embedding = embed(chunk)
Input: Chunk
Output: Embedded chunk (a vector of floats)

Preparation: Store

Storage and Retrieval are both straightforward. Embedding, storing, and retrieving are so closely coupled that it's tricky to speak about them in different sections.

Storage boils down to:

How and where are you going to store your embeddings and associated metadata?

Your choice will impact the following:

  • Your development experience
  • How you can retrieve embeddings
  • What kind of metadata you can store with your embeddings

The choice boils down to specialized vector databases and traditional databases with vector support.

Input: Embedded chunk
Output: Record in the database

Why your database needs vector support

The necessity for vector support isn't for storage, but rather for retrieval. Storing vectors is easy, just JSON stringify them and store the JSON string.

But for retrieval if the database doesn't have vector search support, searching for a vector match would require some code akin to the following pseudo-code:

iterate through batches of records with vectors from DB
  for each batch
    for each record
      turn the JSON into a list/array in your language
      run a proximity match algorithm
      if the match is good enough for your use case, add to matches
sort matches
get first X matches

As you can see this is an O(n) operation, as you have to process all of your records one by one for a match, in your application code.
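
In Python, that pseudo-code comes out to something like the sketch below (the record shape is an assumption, and cosine similarity stands in for "proximity match algorithm"):

import json
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def naive_search(records: list[dict], query: list[float], top_k: int = 5) -> list[dict]:
    matches = []
    # O(n): every record is deserialized and scored in application code
    for record in records:
        embedding = json.loads(record["embedding_json"])
        matches.append((cosine_similarity(query, embedding), record))
    matches.sort(key=lambda pair: pair[0], reverse=True)
    return [record for _, record in matches[:top_k]]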

Specialized vector databases

This might be what you need for seriously large amounts of vectors, on the order of millions, purely from a cost and performance perspective.

In addition, these were the only real option you had before traditional DBs had vector support added.

Hosted:

(Self-)hosted:

You'd probably run these databases in addition to your traditional database, just to store the vectors.

As you might guess, this adds complexity to your setup.

Traditional databases with vector support

This means either MongoDB or SQL databases, with added vector support.

MongoDB has official support since last year (2023) with Atlas Vector Search; I haven't had a chance to use it.

For SQL we have some options depending on our SQL flavor of choice.

Let's talk about metadata

So far we've just been talking about plain chunks, like the raw text from the content we're transforming. But we often need metadata to make these things useful.

The utility of objects

At this point it will actually be useful for you to start using objects.

If you're using Python, I'd recommend either Pydantic or the built-in TypedDict; one way or another, you'll probably need to start associating data with the chunk.

import pydantic

class Chunk(pydantic.BaseModel):
    text: str
    embedding: list[float]

Metadata in the chunk itself

In preparing the chunk for embedding, you don't necessarily have to include only the raw data. You could include metadata into the chunk itself as well.

E.g. if you have the following chunk:

The Sony 1000 is the newest iPhone from Sony.

You can actually play a trick on the embedding model, and make it include certain data in the embedding, without actually making that data part of the chunk's stored text.

For example:

popularity_description = "Very popular."
release_date_description = "Released recently."

text_to_embed = f"The Sony 1000 is the newest iPhone from Sony.\n{popularity_description}\n{release_date_description}"

embedding = embed(text_to_embed)

chunk = Chunk(
    text="The Sony 1000 is the newest iPhone from Sony.",
    embedding=embedding,
)

You see we included some extra information in the text we embedded, separate from the actual text that the chunk stores.

This allows the user's prompt to match more accurately for certain phrases (for example if the user includes the word recent in their prompt).

Input: Text
Output: Chunks of independent text (with metadata)

Metadata to accompany the chunk

In reality, at this point you probably don't want to export just plain chunks from this step. You actually want to export dictionaries of data, where one of the fields is the text chunk.

For example if you're transforming a PDF file into chunks, you might have an object like this one, which includes some metadata like the page number or the paragraph number in relation to the whole PDF:

import pydantic

class PdfChunk(pydantic.BaseModel):
    text: str
    page_num: int
    paragraph_num: int
    embedding: list[float]

We will explore the utility of this later.

Example with Django and PGVector

We'll talk about metadata's value in retrieval in more detail later, but you can see in this example how easily we can store the embedding on our Django model, together with metadata like the page_num and the chunk_num.

from django.db import models
from pgvector.django import L2Distance, VectorField

class PdfChunk(models.Model):
    text = models.TextField()
    page_num = models.PositiveIntegerField()
    chunk_num = models.PositiveIntegerField()
    embedding = VectorField()

text = "Lorem ipsum."

chunk = PdfChunk(
    text=text,
    page_num=3,
    chunk_num=100,
    embedding=embed(text),
)

chunk.save()

# Later, when retrieving

user_prompt = "lol"
user_prompt_embedding = embed(user_prompt)

top_chunk = (
    PdfChunk.objects.annotate(
        distance=L2Distance("embedding", user_prompt_embedding)
    )
    .order_by("distance", "order")
    .first()
)

Retrieval: Embed

In the same way that we had to embed our content, we need to embed the user's prompt.

In this way, we get a mathematical representation of the user's prompt, that we can then compare to the chunks in our database.

This does not differ from the embedding above.

user_prompt = "What is the Sony 1000?"

embedding: list[float] = embed(user_prompt)
Input: User's prompt
Output: Embedded user's prompt (a vector of floats)

Retrieval: Retrieve

Now you have embeddings in your database, with associated metadata, and you have your user's prompt's embedding as well.

Now you can query your DB for similar embeddings.

In essence, you're just running maths on your various embeddings (vectors).

You're trying to figure out the distance between one embedding and another. The smaller the distance, the more similar, and thus the more relevant they would be considered.
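
For example, with the L2 (Euclidean) distance that the pgvector examples in this article use, the comparison is a one-liner (toy values):

import math

chunk_embedding = [0.1, 0.9, 0.3]
prompt_embedding = [0.2, 0.8, 0.3]

# Smaller distance = more similar = more relevant
distance = math.dist(chunk_embedding, prompt_embedding)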

Once you have the most relevant embedding(s), you also have the associated text you originally embedded (hopefully), and any related metadata.

With that you can finally give something useful to your user.

You could return the content as-is, which could be useful as well, or run it through one more step, generation.

Input: Embedded user's prompt, plus the chunks in your database
Output: Relevant chunks

Fetching context for the top match

If you have metadata, like for example the page number or the paragraph number or something like that, you could actually then fetch those related records.

For example you could do something like this to have the most relevant chunk and the surrounding chunks, which could be useful depending on your use case:

from django.db import models
from pgvector.django import L2Distance, VectorField

# Where the model looks like

class PdfChunk(models.Model):
    text = models.TextField()
    page_num = models.PositiveIntegerField()
    chunk_num = models.PositiveIntegerField()
    embedding = VectorField()

# We could do something like

user_prompt = "lol"
user_prompt_embedding = embed(user_prompt)

top_chunk = (
    PdfChunk.objects.annotate(
        distance=L2Distance("embedding", user_prompt_embedding)
    )
    .order_by("distance", "order")
    .first()
)

surrounding_chunks = (
    PdfChunk.objects.filter(
        chunk_num__gte=top_chunk.chunk_num - 2,
        chunk_num__lte=top_chunk.chunk_num + 2,
    )
    .order_by("chunk_num")
    .all()
)

context_text = "\n\n".join(chunk.text for chunk in surrounding_chunks)

Of course there are many ways to work the magic of finding the most relevant chunks.

Generation

This is the step that I think people are most familiar with. It's the use-case you see when you use ChatGPT or when you've heard of any generative AI use-case.

So let's say the user asked us:

What is the Sony 1000?

We'd depend on the model's base knowledge in order to answer this question generatively without RAG.

With RAG, the same question would be filled with real context and real answers.

Our flow looks roughly like this:

# retrieval
embed the user's prompt
check our RAG database for relevant text based on the user's prompt
we find our top chunk

# generation
we ask the AI to please present the information in a user friendly way based on the data in the chunk

Notice we don't use the user's prompt during the generation itself, only for the RAG.
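
A sketch of that generation step with the OpenAI client (the model name and system wording are placeholders; note that the retrieved chunk, not the user's raw prompt, is what goes into the call):

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_answer(top_chunk_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Present the following information in a user-friendly way."},
            {"role": "user", "content": top_chunk_text},
        ],
    )
    return response.choices[0].message.content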

Security, private vs. public applications

Private = Only a limited number of trusted users will access it.

Public = A limited or unlimited number of untrusted users might use it.

For public applications you never want to include the user's prompt directly in the prompt used for generation.

For a simple reason: There is no good way to safeguard against bad actors.

In private applications the use case assumes that there won't be bad actors, and if there are the damage will not be widespread.

In conclusion

I hope that was useful. I've explained to you how the full picture looks on the full stack, from data → DB → prompt.

This information will outlive the daily changes to the meta, because these are the fundamentals.

Misc.: Should I use LangChain?

In my opinion, no.

You tie yourself to an ecosystem in exchange for some syntax sugar. Bad trade.

Thus I had to do all of the above research on my own, and figure all these things out through experimentation.

Further reading

The Intelligence Illusion by Baldur Bjarnason.

He actually researched the subject, while I only had intuitions about it. His findings seem to match my understanding on the matter.

]]>
A workable definition of simple https://greduan.com/blog/2024/07/27/a-workable-definition-of-simple Sat, 27 Jul 2024 00:00:00 +0000 2024-07-27-a-workable-definition-of-simple The oft-debated question.

I'll try to put it together as I see it at this moment.

In my last blog post about OOP and simplicity I already talked about simplicity. But this is a different context so I will expand on it.

The dictionary's definition of simple

From my last blog post I shared the following:

Looking at multiple dictionaries I find the following common definitions:

  1. easily understood or done; presenting no difficulty
  2. plain, basic, or uncomplicated in form, nature, or design; without much decoration or ornamentation
  3. composed of a single element; not compound.

And looking at the etymology (the root of the word, where it came from and how it came to be), from Etymonline.com I find the following:

The sense evolution is from the notion of "without parts" or "having few parts," hence "free from complexity or complication."

Link to the full etymology of "simple".

The computer's definition of simple

For the computer simple very precisely means one thing, less operations.

Obviously the computer doesn't think, but if it could think, since it is the one running the program, this would be its definition of simple.

The programmer's definition of simple

This is the tricky one.

One programmer might think 10 classes for one operation is simple, while another might think one function is complex enough.

Programmers confuse ease with simplicity.

Ease, or easy, is about familiarity.

So LISP will seem easier to a programmer who is used to it, while another who is used to C will find LISP very difficult initially.

Easy is about familiarity.

And therein lies the issue.

Processors are not familiar to us.

We don't think like processors do.

So we try to make things easier, confusing that with simpler. From machine code to Assembly, from Assembly to C, and so on as languages get higher level, and then with all kinds of abstractions.

Two kinds of programming

Quite early on as a programmer, I learnt a particular concept associated with the computers I use every day.

Computers are incredibly fast, accurate and stupid. — Unknown

And it went along with the concept of "thus, your job as a programmer is to drop down to the level of a computer, to understand what it's doing and how to tell it what to do".

That's one style of programming.

The other is the opposite.

Assume that the human is the source of truth for how things should be.

Modern OOP, the most popular paradigm of programming, is in this category.

Modern OOP is obsessed with, well, object-orientation. And this lends itself to abstraction on top of abstraction.

How can I make the real world fit a computer representation and bend the computer to my will.

The problem here is that computers don't understand OOP.

So when you actually want your computer to do OOP, you still have to bend to the computer's will.

More specifically, you have to bend yourself to the data.

Data is king

The computer understands only operations.

But what does it operate on?

DATA.

I.e. information. The bits and bytes running through the wire, ostensibly representing something that humans find useful, normally. But otherwise representing something that computers find useful (as with API protocols).

Therefore, so far there are two things the computer cares about: the data to operate on, and the operations.

Being humble

We as programmers should probably realize our position.

We are programmers.

By definition, our job is to interface with the processor, in more or less direct ways.

Perhaps we should take that into account, and consider what would be SIMPLE FOR THE COMPUTER.

If we adopt this model, perhaps other programmers would have an easy time understanding what's happening too. Because it's clear what's happening.

The computer is stupid. So we can easily figure out what a bit of code is doing if we keep it on the computer's level.

Probably we shouldn't program in Assembly, but we don't need to involve 10 classes/interfaces for 10 lines of procedural code.

The cost of not being humble

We've seen the cost.

The cost of adding abstraction on top of abstraction, on top of yet another abstraction, instead of simply speaking the computer's language.

Software that doesn't do very much more than what it used to do 20 years ago, but somehow it performs way worse. Not just way worse, but orders of magnitude worse.

Moore's law has effectively been nullified by our obsession with not keeping things computer simple.

And did programmers become more productive as a result? I don't think so. We're fixing the same amount of bugs, spending the same amount of time arguing about solutions. And in fact now we spend a lot of time just churning, with new languages and new frameworks coming out so often.

A new definition of simple

Taking all of the above, and putting it together into a cohesive model, here's a proposal for a definition of simple.

The smallest set of instructions for a computer to operate on defined data, fulfilling the desired user or business requirements.

Usually, fewer instructions means less code, which means fewer operations. That means fewer bugs. Less code = fewer bugs.

Usually, fewer instructions means less to keep in your head, which means simpler and less complex. Which means easier to read and understand.

Less complex (less interconnected) means easier to modify without breaking something else. Which means more maintainable.

Less complex (less interconnected) means easier to just throw away and rewrite. Which again means more maintainable.

I think the above is quite a workable definition.

If for business reasons the solution requires an event-based system, that's fair enough. Sometimes the essential complexity of a problem does require that. But then you're probably better served by the languages built for those use cases (e.g. Erlang in the telecom sector).

If it requires no more than 10 lines of procedural code, then you'd be writing complex code by not keeping it to 10 lines of procedural code.

]]>
Clean Code(tm), SOLID, OOP, and Gang of Four are not simple, by definition https://greduan.com/blog/2024/07/25/oop-is-not-simple-by-definition Thu, 25 Jul 2024 00:00:00 +0000 2024-07-25-oop-is-not-simple-by-definition Many will be triggered by the title, but that's all right, I'll lay out my argument.

"By definition" meaning if you look strictly at the definition of the words involved.

So let's do that.

The definition of simple

Looking at multiple dictionaries I find the following common definitions:

  1. easily understood or done; presenting no difficulty
  2. plain, basic, or uncomplicated in form, nature, or design; without much decoration or ornamentation
  3. composed of a single element; not compound.

And looking at the etymology (the root of the word, where it came from and how it came to be), from Etymonline.com I find the following:

The sense evolution is from the notion of "without parts" or "having few parts," hence "free from complexity or complication."

Link to the full etymology of "simple".

The definition of complex

In a similar guise, I find the following common definitions for complex:

  1. consisting of many different and connected parts
  2. not easy to analyse or understand; complicated or intricate

And looking at the etymology from Etymonline I find the following:

composed of interconnected parts, formed by a combination of simple things or elements

Link to the full etymology of "complex".

The argument

Looking at the above, I hope my argument is self-evident to anybody familiar with the real-world code that Clean Code(tm), SOLID, OOP, and Gang of Four adherents produce.

My argument is as follows.

Looking strictly at the definitions and etymologies.

Clean Code(tm), SOLID, OOP, and Gang of Four by default violate definitions #2 and #3 of simple, and are in accordance with definition #1 of complex.

Definitions #1 of simple and #2 of complex are arguable. But I think you know what side of the fence I'm on.

These methodologies actively encourage complex and intricate configurations of software concepts in order to fulfill business needs, real or imaginary.

Particularly let's focus on the following concepts:

Object-oriented Programming

In its strictest sense, programming oriented around objects. Objects (normally defined by classes) are meant to represent a real world equivalent as a structure in code.

OOP encourages data definitions, behaviour definitions, and execution to be spread all over the place.

By definition that is complex, and is not simple.

Naturally one can argue that that is up to the programmer. But paradigms carry cultures with them, and OOP has this complex culture.

Clean Code(tm)

I'm careful to say Clean Code(tm) and not clean code. Robert C. Martin pushes a very particular kind of code as Clean Code(tm) and then says "that's your definition of clean code" when you make a valid point against it.

If you seriously follow his rules, your code ends up being hell when you try to 1) find out where anything actually happens, and 2) keep in mind what is actually happening and where you are in the logic at any given moment.

Just as one example, the practice of "small functions, no bigger than 5-10 lines of code" causes you to write 5 functions for one business logic operation, forcing you to jump around to 5 different spots just to know what is actually happening.
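
As a toy illustration (invented names, not from the book) of one operation smeared across tiny functions:

def validate_order(order: dict) -> dict:
    if not order.get("items"):
        raise ValueError("order has no items")
    return order

def price_order(order: dict) -> dict:
    order["total"] = sum(item["price"] for item in order["items"])
    return order

def discount_order(order: dict) -> dict:
    order["total"] *= 0.9  # flat 10% off, just for the example
    return order

def process_order(order: dict) -> dict:
    # One business operation, but the reader chases three more definitions
    # instead of reading ~6 lines in one place
    return discount_order(price_order(validate_order(order)))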

By definition that is complex, and is not simple.

Many will argue that it also has good parts. That's OK, but the good parts are universally just "good advice", and not at all unique to Clean Code(tm).

Sidenote: If you want an example of what Clean Code (the book) should have been, take a look at The Art of Readable Code by Dustin Boswell & Trevor Foucher.

Gang of Four (GoF)

The Gang of Four patterns are a collection of "set ways to solve set problems" for OOP languages.

Often, however, the problem is poorly defined while the solution is well defined, leading to an inflexible set of solutions to an infinite array of problems.

It introduces a situation where, to "properly solve" a problem, you have to do it in the "standard GoF pattern", which is normally an ill fit for the problem at hand.

Their solutions, being heavy on OOP, involve lots of classes, inheritance, and behaviour being spread across multiple files and functions.

In addition you must be familiar with the GoF pattern being used in order to understand the cryptic and often unintuitive naming, and hopefully have some guide as to what mess the programmer implemented in order to adhere to a predefined solution for an ill-defined problem.

By definition that is complex, and is not simple.

Sidenote: If you want to read more about patterns and what they were intended to be by the inventor of the modern concept, the architect Christopher Alexander, read the book Unresolved Forces by Richard Fabian. A long read but quite informative.

SOLID

This is perhaps the most innocent of these. It's a set of guidelines for how to design your classes. It is inextricably linked to OOP, therefore the best it can do is not be complex; with very careful application, a sprinkle of luck, and lots of work, it can perhaps be simple.

But let's look at just one of the things SOLID promotes, getter and setter methods.

To reach for a simple value from a class, one that the codebase implemented for itself and not for any library's use, I MUST call a method. Even to access a value, I must go through a layer of indirection. Even to set that value, even if it's a plain value, I MUST call a method.

By definition that is more complex than it is simple.

"But what if in the future you need to add logic" then you rename the property to have the classic _ prefix and your compiler will now let you know of all the places where you need to do update the code to use the setter.

Gall's Law

A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system.
— Gall's Law, from The Systems Bible (John Gall, 2002)

Further reading

]]>
Essential and Accidental complexity, and performance https://greduan.com/blog/2024/07/25/essential-and-accidental-complexity-and-performance Thu, 25 Jul 2024 00:00:00 +0000 2024-07-25-essential-and-accidental-complexity-and-performance In case you've never heard of these terms, they can be defined simply as follows, starting with the basic words:

Simple means that it is plain, plainly visible, easy to understand, individual and probably not composed of many parts.

Complex means to make an interconnected whole, usually composed of simple(r) parts.

Essential complexity is part and parcel of the problem. It's intrinsic. You cannot get rid of it. It just comes with the problem. Only way to get rid of it is to change the problem. For example if you're launching a rocket, essential complexity would include the fact that you have to break off from the pull of gravity.

Accidental complexity is what you add on top. In other words it's what the development team (particularly the programmer) adds in complexity. It normally comes in some form of "code has to ..." rules, but it could simply be poor design.

The concept comes originally from the paper No Silver Bullet—Essence and Accidents of Software Engineering by Fred Brooks.

It was later expanded on by the paper Out of the Tar Pit by Ben Moseley and Peter Marks.

While I don't agree with the papers' conclusions entirely, these terms are an extremely useful model to think about complexity.

Why would you add accidental complexity? For one of three reasons:

  1. Skill issues
  2. Faithfully following practices that add complexity (Clean Code(tm), SOLID, OOP, GoF, etc.)
  3. Management forced you to

The first two, funnily enough, can be improved by simply watching skillful programmers program under high performance constraints.

Their solutions may be unusual for most of us, but due to the high performance constraints they are forced to keep things simple, otherwise the code will not perform well enough.

So this is my thought of the day: perhaps to keep things high performance we just need to keep them simple, and stop adding abstraction on top of abstraction just to stay in line with some "it's how we should do things" pattern of thought.

The software industry rarely produces high quality software, maybe it's time we took a second look at what we consider "standard practice".

Further reading

]]>
Learnings from Gadget Software https://greduan.com/blog/2024/07/18/learnings-from-gadget-software Thu, 18 Jul 2024 00:00:00 +0000 2024-07-18-learnings-from-gadget-software Sanath (my ex-business partner) and I have decided to part ways. We worked together for a bit over 2 years running a bespoke software agency, where I did the development work and he did marketing & sales.

In honor of our partnership, I decided to write down my main learnings from this adventure.

One use case to lead to immediate ROI

Sanath taught me how to have a level of focus that is unnerving to our customers, with regards to delivering the ONE use case that provides immediate ROI to the customer, and turning down customer demands for other things until that is achieved.

Yes the customer might need 10 different flows, but we can only prove ROI on one of them at a time.

They're spending their time and resources on us, the least we can do is get them ROI ASAP.

This is essentially a Customer First approach. And often protects the client from himself, as clients very often are not very familiar with software development in general.

Python/Django

Early on Sanath made the executive decision that I would be using Django. At the time I was most familiar with C# and Angular, with old familiarity for Node.js.

Him forcing me to use Django actually forced me to learn Python. My third back end language. For which I am very grateful.

Python has been very useful in other areas for quick scripting and pseudo-code.

In addition Django taught me that a framework can be very full featured, but still get out of your way by default. It is very much my default now if I need to just get something running quickly.

Patience

Working with Sanath helped me wrap my mind around long term thinking. He always had a very clear vision for the long term future.

He also taught me patience around financial goals. I have a perhaps bad habit of wanting things NOW, or within 3-6 months. When in a realistic timeline they might take longer. Sanath taught me to think around those longer timelines.

Arrogance

He recommended a particular book that has been pivotal in my mindset since I read it.

"Linchpin: Are You Indispensable?" by Seth Godin.

Totally turned my mind around and addressed some arrogance and impatience issues I was having professionally.

LLMs and RAG

We were using GPT text models since they came out.

But it wasn't until he insisted that I actually learnt how LLMs work and how RAG works.

Now I have a rather complete understanding of RAG which probably not very many people have. For which I hope to soon publish a general guide BTW.

Senior software engineer

Thanks to the opportunities at Gadget Software I could actually graduate into seniorhood as a software engineer. The ability to be given a problem statement, wrap your head around it, experiment and prototype, communicate with the customer about it, perhaps challenge the proposed problem and desired solution, then provide a high quality working solution, independently, was something that I learnt at Gadget Software.

Project manager

I was actually given direct communication channels with the customers, allowing him to step back and focus on other things, and mentor me in the meanwhile.

That communication with the customer gave me insights into how users think, how to communicate challenges to them, and how to sell them on solutions.

That gave me some serious headway into the concept of project management. Organizing work for a project and leading the project to success while balancing development team and customer needs and desires.

In summary

I learnt a lot, and I owe a lot to Sanath. I am very grateful to him for his patience, his mentorship and guidance, and his willingness to take a risk on me.

The lessons learnt I will carry for a lifetime, and will serve me well in my career going forward.

]]>
Drawing text on a window with Odin - Part 1: GLFW https://greduan.com/blog/2024/07/06/drawing-text-on-a-window-with-odin-part-1-glfw Sat, 06 Jul 2024 00:00:00 +0000 2024-07-06-drawing-text-on-a-window-with-odin-part-1-glfw Based in great part on the code in the following Gist, thank you to the original author!

GLFW, OpenGL Window Tutorial in Odin language

package main

import "core:fmt"
import "core:c"
import "vendor:glfw"
import gl "vendor:OpenGL"

GL_MAJOR_VERSION :: 4
GL_MINOR_VERSION :: 1

should_exit := false

main :: proc() {
	fmt.println("Hellope!")

	if glfw.Init() != glfw.TRUE {
		fmt.println("Failed to initialize GLFW")
		return
	}
	defer glfw.Terminate()

	glfw.WindowHint(glfw.RESIZABLE, glfw.TRUE)
	glfw.WindowHint(glfw.OPENGL_FORWARD_COMPAT, glfw.TRUE)
	glfw.WindowHint(glfw.OPENGL_PROFILE, glfw.OPENGL_CORE_PROFILE)
	glfw.WindowHint(glfw.CONTEXT_VERSION_MAJOR, GL_MAJOR_VERSION)
	glfw.WindowHint(glfw.CONTEXT_VERSION_MINOR, GL_MINOR_VERSION)

	window := glfw.CreateWindow(640, 480, "Todo", nil, nil)
	defer glfw.DestroyWindow(window)

	if window == nil {
		fmt.println("Unable to create window")
		return
	}

	glfw.MakeContextCurrent(window)

	// Enable vsync
	glfw.SwapInterval(1)

	glfw.SetKeyCallback(window, key_callback)
	glfw.SetMouseButtonCallback(window, mouse_callback)
	glfw.SetCursorPosCallback(window, cursor_position_callback)
	glfw.SetScrollCallback(window, scroll_callback) // register the scroll handler too; it was defined but never hooked up
	glfw.SetFramebufferSizeCallback(window, framebuffer_size_callback)

	gl.load_up_to(GL_MAJOR_VERSION, GL_MINOR_VERSION, glfw.gl_set_proc_address)

	for !glfw.WindowShouldClose(window) && !should_exit {
		gl.ClearColor(0.2, 0.3, 0.3, 1.0)
		gl.Clear(gl.COLOR_BUFFER_BIT) // clear with the color set above

		glfw.SwapBuffers(window)
		glfw.PollEvents()
	}
}

key_callback :: proc "c" (window: glfw.WindowHandle, key, scancode, action, mods: i32) {
	if key == glfw.KEY_ESCAPE && action == glfw.PRESS {
		should_exit = true
	}
}

mouse_callback :: proc "c" (window: glfw.WindowHandle, button, action, mods: i32) {}

cursor_position_callback :: proc "c" (window: glfw.WindowHandle, xpos, ypos: f64) {}

scroll_callback :: proc "c" (window: glfw.WindowHandle, xoffset, yoffset: f64) {}

framebuffer_size_callback :: proc "c" (window: glfw.WindowHandle, width, height: i32) {}

There are some changes we need to make from the code in the Gist.

  1. Your window hints have to come after the GLFW init. Otherwise, it will crash.
  2. The original code is missing some hints for macOS, and they're in the wrong order.

On macOS one needs to specify GLFW_OPENGL_FORWARD_COMPAT and GLFW_OPENGL_PROFILE BEFORE the GLFW_CONTEXT_VERSION_MAJOR and GLFW_CONTEXT_VERSION_MINOR hints.

GLFW docs specify as much:

macOS: The OS only supports forward-compatible core profile contexts for OpenGL versions 3.2 and later. Before creating an OpenGL context of version 3.2 or later you must set the GLFW_OPENGL_FORWARD_COMPAT and GLFW_OPENGL_PROFILE hints accordingly. OpenGL 3.0 and 3.1 contexts are not supported at all on macOS.

]]>
The pleasure of writing Clean OOP code /s https://greduan.com/blog/2024/06/30/the-pleasure-of-writing-clean-oop-code Sun, 30 Jun 2024 00:00:00 +0000 2024-06-30-the-pleasure-of-writing-clean-oop-code Here's the task:

  • You have an endpoint, and you are given a_id and b_id, you have to associate in the DB a with b.
  • And in addition you have to associate a.c with b.
  • a can only have one b associated with it, and if it already does, then it shouldn't be reassigned, should just return success.
  • c can have multiple bs associated with it, although of course each one only once.
  • You have to treat the whole thing as a transaction, and only save changes once you've done all changes successfully.

Now the above is almost the pseudocode, but let's do it in proper pseudocode, AKA Python that kinda looks like Django but not quite:

def assign_b_to_a_and_c(a_id: str, b_id: str):
    try:
        a = A.objects.get(id=a_id)
    except A.DoesNotExist:
        return 404

    # Already done, skip
    if "b_id" in a and a.b_id is not None:
        return 200

    # Just make sure it exists
    try:
        B.objects.get(id=b_id)
    except B.DoesNotExist:
        return 404

    # Associate A with B
    a.b_id = b_id

    try:
        c = C.objects.get(id=a.c_id)
    except C.DoesNotExist:
        return 404

    # Associate C with B
    if "b_ids" in c and c.b_ids is not None:
        c.b_ids += b_id
    else:
        c.b_ids = [b_id]

    with transaction.atomic():
        a.save()
        c.save()

I'd say it's rather straightforward, easy to follow and to understand.

This is what I like to call simple code, at least as simple as it can be in Python, ignoring the kinda ugly try/except syntax and so on.

I can read it top to bottom, it fits in one screen in this case, and I know what's happening at any given moment, no surprises.


Code review: We need to follow the architecture rules.

OK.

Let's add some OOP to this, because actually while here we do it, we do in fact need to do these same operations in other parts of the code, so let's DRY a little.

# views.py
def assign_b_to_a_and_c(a_id: str, b_id: str):
    # ... same as before ...

    # Associate A with B
    a.assign_b(b_id)

    # ... same as before ...

    # Associate C with B
    c.add_b(b_id)

    # ... same as before ...
# models.py
class A(models.Model):
    # ...
    def assign_b(self, b_id: str):
        # Oops some duplication, whatever
        if "b_id" in self and self.b_id is not None:
            return
        self.b_id = b_id

class C(models.Model):
    # ...
    def add_b(self, b_id: str):
        if "b_ids" in self and self.b_ids is not None:
            self.b_ids += b_id
        else:
            self.b_ids = [b_id]

Hmm... suddenly I can't read the code in one go. And unless I'm familiar with what a.assign_b() and c.add_b() do, which as a first-time reader I hope are named correctly, I have to jump a file or two to figure out what's happening.

No biggie, this is normal.


Let's go a bit further to follow the proper architecture rules.

Every time we assign a b, we actually want to save the record, says someone. So when we call a.assign_b() we are going to also save. A reasonable statement, especially if in most cases this is what we intend to do.

# views.py
def assign_b_to_a_and_c(a_id: str, b_id: str):
    # ... same as before ...

    # Associate A with B
    a.assign_b(b_id)

    # ... same as before ...

    # Associate C with B
    c.add_b(b_id)

    # ... same as before ...
# models.py
class A(models.Model):
    # ...
    def assign_b(self, b_id: str):
        # Oops some duplication, whatever
        if "b_id" in self and self.b_id is not None:
            return
        self.b_id = b_id
        self.save()

Those of you following along will realize this breaks one of the requirements: this was meant to run as a transaction.

During PR review somebody realizes this and requests that the programmer fixes it.

So he does.

# views.py
def assign_b_to_a_and_c(a_id: str, b_id: str):
    try:
        a = A.objects.get(id=a_id)
    except A.DoesNotExist:
        return 404

    # Already done, skip
    if "b_id" in a and a.b_id is not None:
        return 200

    # Just make sure it exists
    try:
        B.objects.get(id=b_id)
    except B.DoesNotExist:
        return 404

    try:
        c = C.objects.get(id=a.c_id)
    except C.DoesNotExist:
        return 404

    # Associate C with B
    if "b_ids" in c and c.b_ids is not None:
        c.b_ids += b_id
    else:
        c.b_ids = [b_id]

    with transaction.atomic():
        a.assign_b(b_id)
        c.save()

Well obviously that's ugly so let's change c.add_b() as well:

# views.py
def assign_b_to_a_and_c(a_id: str, b_id: str):
    # ... same as before ...
    with transaction.atomic():
        a.assign_b(b_id)
        c.add_b(b_id)
# models.py
# ...
class C(models.Model):
    # ...
    def add_b(self, b_id: str):
        if "b_ids" in self and self.b_ids is not None:
            self.b_ids += b_id
        else:
            self.b_ids = [b_id]
        self.save()

OK we're back to a normal scenario, and now things are transactional again.

This is how our code looks right now:

# views.py
def assign_b_to_a_and_c(a_id: str, b_id: str):
    try:
        a = A.objects.get(id=a_id)
    except A.DoesNotExist:
        return 404

    # Already done, skip
    if "b_id" in a and a.b_id is not None:
        return 200

    # Just make sure it exists
    try:
        B.objects.get(id=b_id)
    except B.DoesNotExist:
        return 404

    try:
        c = C.objects.get(id=a.c_id)
    except C.DoesNotExist:
        return 404

    with transaction.atomic():
        a.assign_b(b_id)
        c.add_b(b_id)
# models.py
class A(models.Model):
    # ...
    def assign_b(self, b_id: str):
        # Oops some duplication, whatever
        if "b_id" in self and self.b_id is not None:
            return
        self.b_id = b_id
        self.save()

class C(models.Model):
    # ...
    def add_b(self, b_id: str):
        if "b_ids" in self and self.b_ids is not None:
            self.b_ids += b_id
        else:
            self.b_ids = [b_id]
        self.save()

Beautiful.

The amount of code increased slightly.

The amount of complexity hasn't reduced.

But now we're more properly encapsulated, y'know?

Technically Python doesn't have private methods and private classes, but in this way we at least let the model control the logic of how it expects to work and how it expects its logic to be modified.

It's true that now the code is harder to read, you have to jump around and just know that the self.save() will happen inside of these methods. But again, encapsulation is a clear win in this case.


Now, this code is a lie.

It's not segregated enough so it cannot be.

A and C are actually in two different Domains inside of our Onion Architecture (AKA Hexagonal Architecture, or Clean Architecture, all very similar). And the way you communicate between these layers in an Event-driven Architecture is of course by events!

So here's what we need to do.

  • Add events
  • Separate these models into two different Django apps
  • Pass information between domains using a special "integration event"
  • Save while handling the event, as part of handling the event

Actually, this is where my brilliant Python code breaks down: Python doesn't even allow this, because circular dependency graphs are not possible in Python due to execution order.

But for the sake of argument, so you can see how understandable and easy to read and maintain this code is, I give you some theoretical Python code.

BTW I have really worked on codebases this brilliant.

# views.py
def assign_b_to_a_and_c(a_id: str, b_id: str):
    try:
        a = A.objects.get(id=a_id)
    except A.DoesNotExist:
        return 404

    # Already done, skip
    if "b_id" in a and a.b_id is not None:
        return 200

    # Just make sure it exists
    try:
        B.objects.get(id=b_id)
    except B.DoesNotExist:
        return 404

    # Wow this code is so simple and minimal!
    with transaction.atomic():
        a.assign_b(b_id)
        a.save()
# A/events.py
class AUpdatedEvent():
    a: A

    def __init__(self, a: A):
        self.a = a


class AUpdatedIntegrationEvent():
    a: A

    def __init__(self, a: A):
        self.a = a
# A/handlers.py
class AUpdatedEventHandler():
    def handle(self, event: AUpdatedEvent):
        # Imagine this function exists
        push_to_async_event_bus(AUpdatedIntegrationEvent(event.a))
# A/models.py
class A(models.Model):
    # ...
    def assign_b(self, b_id: str):
        if "b_id" in self and self.b_id is not None:
            return
        self.b_id = b_id
        self.add_domain_event(AUpdatedEvent(self))
# C/handlers.py
class AUpdatedIntegrationEventHandler():
    def handle(self, event: AUpdatedIntegrationEvent):
        try:
            c = C.objects.get(id=event.a.c_id)
        except C.DoesNotExist:
            return 404
        c.add_b(event.a.b_id)
        c.save()
# C/models.py
class C(models.Model):
    # ...
    def add_b(self, b_id: str):
        if "b_ids" in self and self.b_ids is not None:
            self.b_ids += b_id
        else:
            self.b_ids = [b_id]

If you can't follow along with this easy to follow code, I'm sorry but, skill issues.

Of course we now violated the idea of simple, and of transactions, and all these things that are actually very useful to us. But in exchange: it's more maintainable and it's segregated and decentralized!


I hope you've realized that this article is a criticism of this kind of code, not a love letter.

This kind of code can only be produced when you've forsaken how the computer actually works (procedurally), and you're enamoured with the idea that code must be "Clean" (capital C, Uncle Bob), OOP, SOLID and so on.

That performance is for the hardware to handle, not for the engineer to handle.

And you've adopted the idea that somehow complexity is simpler to understand and more maintainable than simplicity.

Some people will genuinely argue that the final version is more maintainable.


Now let's get to the actual point of this article.

First, in case you're wondering, this really happened to me, recently even. In the end the transaction was thrown out the window. I'm sure that won't cause any issues ever.

But I don't want anyone to get distracted by the code.

Yes the code these ideologies produce is hard to follow.

But there are good ideologue programmers that will produce good code. But it will be despite the ideology, not because of it.

There are good and bad programmers anywhere and everywhere.

But these ideologies encourage this kind of indirect and hard to follow code.

They make the programmer's job harder, and the computer's.


Don't think I will leave you just on the negatives!

If you want an alternative, I recommend you slowly load yourself up on the following:

If this post is your first introduction to this idea, welcome!

The name of the idea is Data-Oriented Design as described by Mike Acton, not by Stoyan Nikolov who has a totally different concept.

Love it or hate it, hello to you too.

I sincerely hope some day the software engineering craft can come to think of this as common sense, instead of the excessive accidental complexity we consider normal nowadays.

P.S. if you want to keep the OOP and Clean mindset, then I highly encourage you to at least read Code Complete 2, it is a much more useful resource on the subject of how to actually program, compared to Clean Code.

]]>
I've removed AI from my workflow https://greduan.com/blog/2024/06/30/ive-removed-ai-from-my-workflow Sun, 30 Jun 2024 00:00:00 +0000 2024-06-30-ive-removed-ai-from-my-workflow I removed Copilot and ChatGPT (and other built-in editor AI assistants) from my workflow.

Why would I do that? Besides the fact that the generated code is often trash.

Quite simply, it was quickly making me lazy.

I wasn't engineering anymore.

And I became aware of the real legal risks of using generative AI to do creative work. And programming is creative work.

For more on that check The Intelligence Illusion by Baldur Bjarnason.

Also it feels a bit weird to know that AI is deeply flawed and can easily spit out any amount of BS, and yet use it on a daily basis and then charge my clients for it.

Now the only way in which ChatGPT contributes is by me asking it some questions from time to time to understand a new concept in a basic form.

I now rely much more on two tools:

I feel like I'm thinking and using my mind much more now, which is a great positive as an engineer.

]]>
My largest regret https://greduan.com/blog/2024/06/20/my-largest-regret Thu, 20 Jun 2024 00:00:00 +0000 2024-06-20-my-largest-regret "I worked too much" is a common regret on elders' deathbeds, but what about "I worked too little"?

My personal regret is going to be that. I worked too little during the first half of my 20s.

It's a cliche: but getting complacent is not nice, in retrospect.

After I got my first job, I stopped learning with the fervor I learnt with for the first 5 years of my software engineering journey.

I didn't keep pushing at my job, I didn't look for opportunities, I wasted my free time on YouTube and forums.

I burnt out, simply from the soul-sucking activity of wasting time at work instead of working.

I turned that around over the last couple years by simply taking life seriously.

  1. I started a side business with a good friend, which we've been working on for 2 years now, this has kept me learning new technical skills and new soft skills, particularly around project management.

  2. I spend a couple hours a week on a passion side project. Just proving to myself that writing simple, high quality software IS possible. And pushing the limit of my skills.

  3. I throw my everything at my day job. Maybe they "don't pay enough". Maybe they "don't deserve it". But throwing my everything allows me to keep developing myself, and prevent burnout.

If you're interested in this subject I suggest you read "Linchpin: Are You Indispensable" by Seth Godin. My best friend and mentor recommended it to me, and it's now one of my favorite books.

]]>
The only way forward for developers https://greduan.com/blog/2024/05/02/the-only-way-forward-for-developers Thu, 02 May 2024 00:00:00 +0000 2024-05-02-the-only-way-forward-for-developers Recently came across two posts, I'm a programmer and I'm stupid and The one about the web developer job market, one touching on simply KISS, and the other about the current web dev job market, and how AI is affecting it.

These two posts segue into a thought I've been having.

I think the only devs that will have a good job going forward will be those that can deliver pure value, no fluff, making money for real businesses.

To do that quickly and accurately, you need to be lean and only use working tech, no fluff. Like the "stupid programmer" post says.

Your users don't care if it's OOP or if it's a switch case with 1000 cases. They couldn't care less. (See Undertale's dialogue switch statement with all of the game's dialogue being put in there.)

On one project right now I'm dealing with an over engineered architecture, design patterns, OOP, DDD, blah blah. It takes forever to write anything and make money with it. It will take 6-8 months for a medium-sized feature.

The moment a real lean competitor comes along that is business savvy, it's game over.

Versus at another project, in 60 hours I've been able to deliver a full working product that saves our customer money and time and is ready to have features added to it.

The only difference is the mindset of the architecture and the devs implementing it.

]]>
Python project setup https://greduan.com/blog/2023/12/03/python-project-setup Sun, 03 Dec 2023 00:00:00 +0000 2023-12-03-python-project-setup I've been playing around with my Python project setup a little bit, and I have found the following setup to be quite convenient and comfortable.

I make use of the Makefile as a sort of script runner (not what it's meant for btw).

I use a Django project as an example, that's using Tailwind and Svelte as well.

(Please note that for Makefiles, you need to indent with tabs, spaces are not syntactically valid.)

Setup script, dependency management with pip-tools

I have a make setup script, which I only run once, or when I delete the venv/ directory.

Then for dependency management I make use of pip-tools which is basically pip-compile+pip-sync.

You give pip-compile a requirements file with the dependencies you INTEND to have, and it figures out the full set of dependencies you actually need in order to fulfill that intention.

Then you use pip-sync to make the virtualenv only contain the libraries that you originally intended.

For pip-compile I use a requirements.in file (like requirements.txt but much smaller and more stable).

That generates a requirements.txt that reflects what the requirements.in actually needs.

This is useful when you want to uninstall dependencies. Because then you just remove them from the requirements.in, you run pip-compile and then pip-sync and boom, you only have what you need.
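For illustration, a hypothetical requirements.in can be as small as this (the package names are just examples):

# requirements.in -- only the packages I directly intend to depend on
django
django-tailwind
gunicorn

pip-compile then expands that into a fully pinned requirements.txt (hashes included if you pass --generate-hashes, as the Makefile below does), and pip-sync makes the venv match it exactly.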

In the Makefile I use make update for pip-compile and make install for pip-sync.

make run script

Depending on the project, I may or may not have this.

For this particular project you can see how it turned out below. I implement a trick so that I can run multiple commands at once, and when I Ctrl-C it exits all the commands at once: the commands run in a subshell whose SIGINT trap (trap 'kill 0' SIGINT) kills the whole process group.

The Makefile and helper scripts

Finally here's how the Makefile looks and the extra helper scripts.

# Only meant to be run once when setting up the project locally
.PHONY: setup
setup:
	pyenv exec python -m venv venv && . venv/bin/activate && pip install --upgrade pip && python -m pip install pip-tools
	chmod +x ./python ./manage

# Run every time the requirements.in changes
.PHONY: update
update:
	. venv/bin/activate && pip-compile --generate-hashes requirements.in

# Run every time the requirements.txt changes
.PHONY: install
install:
	. venv/bin/activate && pip-sync --pip-args '--no-deps' && ./manage tailwind install
	cd svelte && pnpm install

# Runs all the build and run processes in parallel
.PHONY: run
run:
	# Trick to run multiple commands in parallel and kill them all at once
	(trap 'kill 0' SIGINT; make runserver & make svelte & make tailwind & wait)

.PHONY: runserver
runserver:
	./manage runserver

.PHONY: svelte
svelte:
	cd svelte && pnpm run watch

.PHONY: tailwind
tailwind:
	./manage tailwind start

python file:

#!/usr/bin/env sh  
  
set -e  
  
. venv/bin/activate  
  
if [ "${1:-}" = 'sh' ]; then
    # if the first arg is sh, like in our supervisorctl conf file, skip it  
    shift 1  
fi  
  
python "$@"

manage file:

#!/usr/bin/env bash  
  
set -e  
  
. venv/bin/activate  
  
python manage.py "$@"
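
With those two wrappers in place you never activate the venv by hand. For example:

./manage migrate
./python -m pip list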
]]>
Using pyenv on DigitalOcean Ubuntu 22.04 https://greduan.com/blog/2023/12/02/pyenv-digitalocean-ubuntu-22-04 Sat, 02 Dec 2023 00:00:00 +0000 2023-12-02-pyenv-digitalocean-ubuntu-22-04 If you're running Python on Ubuntu, you might install pyenv with the following command:

curl https://pyenv.run | bash

And then you might run pyenv install 3.11.4 to install the Python version you need.

And you might run into an error like the following!:

Downloading Python-3.11.4.tar.xz...
-> https://www.python.org/ftp/python/3.11.4/Python-3.11.4.tar.xz
Installing Python-3.11.4...
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/demo/.pyenv/versions/3.11.4/lib/python3.11/bz2.py", line 17, in <module>
    from _bz2 import BZ2Compressor, BZ2Decompressor
ModuleNotFoundError: No module named '_bz2'
WARNING: The Python bz2 extension was not compiled. Missing the bzip2 lib?
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/demo/.pyenv/versions/3.11.4/lib/python3.11/curses/__init__.py", line 13, in <module>
    from _curses import *
ModuleNotFoundError: No module named '_curses'
WARNING: The Python curses extension was not compiled. Missing the ncurses lib?
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/demo/.pyenv/versions/3.11.4/lib/python3.11/ctypes/__init__.py", line 8, in <module>
    from _ctypes import Union, Structure, Array
ModuleNotFoundError: No module named '_ctypes'
WARNING: The Python ctypes extension was not compiled. Missing the libffi lib?
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'readline'
WARNING: The Python readline extension was not compiled. Missing the GNU readline lib?
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/demo/.pyenv/versions/3.11.4/lib/python3.11/ssl.py", line 100, in <module>
    import _ssl             # if we can't import it, let the error propagate
    ^^^^^^^^^^^
ModuleNotFoundError: No module named '_ssl'
ERROR: The Python ssl extension was not compiled. Missing the OpenSSL lib?

Please consult to the Wiki page to fix the problem.
<https://github.com/pyenv/pyenv/wiki/Common-build-problems>


BUILD FAILED (Ubuntu 22.04 using python-build 2.3.24)

Inspect or clean up the working tree at /tmp/python-build.20230812213442.20196
Results logged to /tmp/python-build.20230812213442.20196.log

Last 10 log lines:
        LD_LIBRARY_PATH=/tmp/python-build.20230812213442.20196/Python-3.11.4 ./python -E -m ensurepip \
                $ensurepip --root=/ ; \
fi
Looking in links: /tmp/tmph5_fnlth
Processing /tmp/tmph5_fnlth/setuptools-65.5.0-py3-none-any.whl
Processing /tmp/tmph5_fnlth/pip-23.1.2-py3-none-any.whl
Installing collected packages: setuptools, pip
  WARNING: The scripts pip3 and pip3.11 are installed in '/home/demo/.pyenv/versions/3.11.4/bin' which is not on PATH.
  Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
Successfully installed pip-23.1.2 setuptools-65.5.0

If you manage to fix that one, you'll be bothered with the next one, and the next one.

So here we go, I will share with you all the libraries you need to install.

sudo apt-get install build-essential libbz2-dev libncurses5-dev libncursesw5-dev libffi-dev libreadline-dev libssl-dev libsqlite3-dev liblzma-dev zlib1g-dev

But then in that case you might run into the following errors when you run sudo apt-get update, because you're using an older version of Ubuntu!

Hit:1 http://old-releases.ubuntu.com/ubuntu hirsute-security InRelease
Get:2 https://download.docker.com/linux/ubuntu hirsute InRelease [48.9 kB]
Ign:3 http://mirrors.digitalocean.com/ubuntu hirsute InRelease
Ign:4 http://mirrors.digitalocean.com/ubuntu hirsute-updates InRelease
Hit:5 https://repos-droplet.digitalocean.com/apt/droplet-agent main InRelease
Ign:6 http://mirrors.digitalocean.com/ubuntu hirsute-backports InRelease
Err:7 http://mirrors.digitalocean.com/ubuntu hirsute Release
  404  Not Found [IP: 172.67.148.71 80]
Err:8 http://mirrors.digitalocean.com/ubuntu hirsute-updates Release
  404  Not Found [IP: 172.67.148.71 80]
Err:9 http://mirrors.digitalocean.com/ubuntu hirsute-backports Release
  404  Not Found [IP: 172.67.148.71 80]
Reading package lists... Done
E: The repository 'http://mirrors.digitalocean.com/ubuntu hirsute Release' no longer has a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
E: The repository 'http://mirrors.digitalocean.com/ubuntu hirsute-updates Release' no longer has a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
E: The repository 'http://mirrors.digitalocean.com/ubuntu hirsute-backports Release' no longer has a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.

To fix that, you update all of your apt sources to use https://old-releases.ubuntu.com/ubuntu/:

sudo vi /etc/apt/sources.list
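
If you'd rather not edit the file by hand, a one-liner along these lines should do it (check the exact mirror URL in your own sources.list first; mine pointed at mirrors.digitalocean.com):

sudo sed -i 's|http://mirrors.digitalocean.com/ubuntu|https://old-releases.ubuntu.com/ubuntu|g' /etc/apt/sources.list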

The reference to that fix is: https://www.digitalocean.com/community/questions/apt-update-not-working-on-ubuntu-21-04

Hope this saves you some time, as it would have saved me.

]]>
Basic security in Python Litestar projects (bonus HTMX CSRF config) https://greduan.com/blog/2023/12/01/basic-security-python-litestar-bonus-htmx-csrf Fri, 01 Dec 2023 00:00:00 +0000 2023-12-01-basic-security-python-litestar-bonus-htmx-csrf In Litestar projects the batteries for security features come in the package, but you still have to insert them yourself: you need to configure them explicitly.

I'll walk you quickly through the basics:

  • CSRF
  • CORS
  • Allowed hosts

CSRF

https://docs.litestar.dev/2/usage/middleware/builtin-middleware.html#csrf

How I do it is as follows:

  1. I configure a CSRF_SECRET env var
  2. I load those env vars using dotenv (python-dotenv pip package)
  3. Check during application initialization if CSRF_SECRET has been defined or not, if it hasn't then I exit immediately
  4. Configure a CSRFConfig middleware for Litestar
  5. Use that middleware in Litestar
  6. (Bonus) Configure HTMX to include the CSRF token in all of its requests in a header using the csrf_token() template function, which injects the token for the current request into the template when called
csrf_secret = os.environ.get('CSRF_SECRET', None)
if csrf_secret is None:
    raise ValueError('CSRF_SECRET environment variable must be set.')
# The default cookie name is 'csrftoken', but we want to use 'x-csrftoken' to
# avoid conflicts with something else (don't know what)
csrf_config = CSRFConfig(secret=csrf_secret, cookie_name='x-csrftoken')

app = Litestar(
	# ...
    csrf_config=csrf_config,
	# ...
)
<script>
  document.body.addEventListener('htmx:configRequest', function(evt) {
    evt.detail.headers['x-csrftoken'] = '{{ csrf_token() }}';
  });
</script>

Note that CSRFConfig actually allows you to configure what header it should look at to check if the front end is properly sending a CSRF token. Take a look at the docs linked above for more details on that.
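
Step 2 above, loading the env vars, is just a couple of lines with python-dotenv; a minimal sketch (run it during startup, before any of the os.environ.get lookups):

from dotenv import load_dotenv

# Reads the .env file at the project root into os.environ
load_dotenv()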

CORS

https://docs.litestar.dev/2/usage/middleware/builtin-middleware.html#cors

To include CORS security measures it is much simpler.

  1. Configure an ALLOW_ORIGIN variable (domain URLs, separated by comma), e.g. "https://greduan.com"
  2. Configure a CORSConfig middleware
  3. Use that middleware in Litestar
allow_origin = os.environ.get('ALLOW_ORIGIN', None)
if allow_origin is None:
    raise ValueError('ALLOW_ORIGIN environment variable must be set.')
cors_config = CORSConfig(allow_origins=allow_origin.split(','))

app = Litestar(
	# ...
    cors_config=cors_config,
	# ...
)

Allowed hosts

https://docs.litestar.dev/2/usage/middleware/builtin-middleware.html#allowed-hosts

Once again very simple, almost the same as with CORS.

  1. Configure an ALLOWED_HOSTS env var with domains (no ports!), separated by comma, e.g. "127.0.0.1,localhost"
  2. Set up the middleware with the allowed hosts
  3. And use that in Litestar
allowed_hosts = os.environ.get('ALLOWED_HOSTS', None)
if allowed_hosts is None:
    raise ValueError('ALLOWED_HOSTS environment variable must be set.')
allowed_hosts = allowed_hosts.split(',')

app = Litestar(
	# ...
    allowed_hosts=AllowedHostsConfig(allowed_hosts=allowed_hosts),
	# ...
)

Putting it all together

The environment variables, as an example for localhost running on port 8000:

ALLOW_ORIGIN='127.0.0.1:8000'
CSRF_SECRET="boom boom boom boom, I want you in my room, let's spend the night together, tonight until forever!"
ALLOWED_HOSTS='127.0.0.1,localhost'

And all of the Python code:

allow_origin = os.environ.get('ALLOW_ORIGIN', None)
if allow_origin is None:
    raise ValueError('ALLOW_ORIGIN environment variable must be set.')
cors_config = CORSConfig(allow_origins=allow_origin.split(','))

csrf_secret = os.environ.get('CSRF_SECRET', None)
if csrf_secret is None:
    raise ValueError('CSRF_SECRET environment variable must be set.')
# The default cookie name is 'csrftoken', but we want to use 'x-csrftoken' to
# avoid conflicts with something else (don't know what)
csrf_config = CSRFConfig(secret=csrf_secret, cookie_name='x-csrftoken')

allowed_hosts = os.environ.get('ALLOWED_HOSTS', None)
if allowed_hosts is None:
    raise ValueError('ALLOWED_HOSTS environment variable must be set.')
allowed_hosts = allowed_hosts.split(',')

app = Litestar(
	# ...
    cors_config=cors_config,
    csrf_config=csrf_config,
    allowed_hosts=AllowedHostsConfig(allowed_hosts=allowed_hosts),
	# ...
)
]]>
Shallow thoughts are cheaper for experts https://greduan.com/blog/2023/11/27/shallow-thoughts-are-cheaper-for-experts Mon, 27 Nov 2023 00:00:00 +0000 2023-11-27-shallow-thoughts-are-cheaper-for-experts Chess players have studied and observed the same plays and patterns thousands of times over. Running through the next possible 5 moves is cheaper for them, because half of those moves they can already see the moment they look at the position.

In a similar way, it is likely that when a programmer looks at a problem proposition (a ticket), the main patterns behind that pop into their mind immediately.

What that affords is overall deeper thought.

The shallow thoughts don't take much mental capacity.

Meanwhile for a novice the opposite is true. Shallow thoughts do take up their effective mental capacity, because for them, shallow thoughts ARE deep thoughts.

But the perceived effort is the same.

This is how you get the Dunning-Kruger effect.

]]>
Throw early for programmer errors https://greduan.com/blog/2023/08/27/throw-early-for-programmer-errors Sun, 27 Aug 2023 00:00:00 +0000 2023-08-27-throw-early-for-programmer-errors Functions will normally, or at least sometimes, guard against invalid inputs. That might mean rejecting inputs that are outside the valid range, or simply checking that the inputs exist, and so on.

Sometimes the input is hardcoded: the programmer says what the input is, explicitly.

It's not a value from the system that's being passed around.

In those cases, I argue, you should throw early.

Giving the programmer a chance to see it quickly, and fix it quickly, instead of having to debug.

The following diff might illustrate the point. This is a real diff after I hunted the bug down and finally found it.

The yearKey variable was set to a name for which there was no Control in the Angular FormGroup, therefore later down in the logic, the validator never actually passes because the value is always undefined (control?.value == null always of course).

diff --git a/client/projects/common/src/lib/model/birthday-validator.ts b/client/projects/common/src/lib/model/birthday-validator.ts
index e535ccad..aab96d20 100644
--- a/client/projects/common/src/lib/model/birthday-validator.ts
+++ b/client/projects/common/src/lib/model/birthday-validator.ts
@@ -35,6 +35,13 @@ export function byYearValidator(
     if (!form) {
       return null;
     }
+    const controls = Object.keys(form.controls);
+    if (!controls.includes(onlyYearFieldKey) || !controls.includes(yearKey)) {
+      // We throw because it's a programmer error, better to catch and fix early
+      throw new Error(
+        `Form does not contain '${onlyYearFieldKey}' FormControl or '${yearKey}' FormControl`,
+      );
+    }
     const onlyYear = form.controls[onlyYearFieldKey]?.value;
     const year = form.controls[yearKey]?.value;
 
@@ -56,6 +63,13 @@ export function byBirthdayValidator(
     if (!form) {
       return null;
     }
+    const controls = Object.keys(form.controls);
+    if (!controls.includes(onlyYearFieldKey) || !controls.includes(birthdayKey)) {
+      // We throw because it's a programmer error, better to catch and fix early
+      throw new Error(
+        `Form does not contain '${onlyYearFieldKey}' FormControl or '${birthdayKey}' FormControl`,
+      );
+    }
     const onlyYear = form.controls[onlyYearFieldKey]?.value;
     const birthday = form.controls[birthdayKey]?.value;
]]>
Svelte v4 in Django using Webpack https://greduan.com/blog/2023/08/07/svelte-v4-in-django-using-webpack Mon, 07 Aug 2023 00:00:00 +0000 2023-08-07-svelte-v4-in-django-using-webpack December last year I wrote a blog post for using Svelte components in a Django app using Rollup which was cool, but with the latest version of Svelte, Svelte 4, it stopped working.

This blog post adds on to that one.

The Rollup setup from that post now always produces code like this:

(function (internal) {
  // ...
})(internal);

Where of course internal isn't defined in the global scope. So that approach no longer works.

So I come back with a solution using Webpack.

It supports TypeScript and Svelte. It extracts the basic CSS styles as well, from the <style></style> tags in your Svelte files.

Tailwind

It does not compile Tailwind styles.

Your Django setup will compile these for you; just add your **/*.svelte files to the Tailwind config.

So in my config it looks something like:

module.exports = {
    content: [
        ...
        '../../**/*.svelte',
        ...
    ],
    theme: {
        ...
    },
    plugins: [
        ...
    ],
}

Although that setup doesn't understand @apply calls within the <style></style> tags in your Svelte components. For that you will have to update the Webpack config.

The Webpack setup

The webpack.config.js:

const path = require('path');  
const sveltePreprocess = require('svelte-preprocess');  
  
module.exports = {  
  mode: 'development',  
  devtool: 'eval-source-map',  
  entry: './src/demo.ts',  
  module: {  
    rules: [  
      {  
        test: /\.tsx?$/,  
        use: 'ts-loader',  
        exclude: /node_modules/,  
      },  
      {  
        test: /\.(html|svelte)$/,  
        use: {  
          loader: 'svelte-loader',  
          options: {  
            preprocess: sveltePreprocess(),  
          },  
        },  
      },  
      {  
        // required to prevent errors from Svelte on Webpack 5+  
        test: /node_modules\/svelte\/.*\.mjs$/,  
        resolve: {  
          fullySpecified: false  
        }  
      },  
    ],  
  },  
  resolve: {  
    extensions: ['.tsx', '.ts', '.js', '.svelte'],  
    mainFields: ['svelte', 'browser', 'module', 'main'],  
    conditionNames: ['svelte', 'browser'],  
    alias: {  
      svelte: path.resolve('node_modules', 'svelte/src/runtime'),  
    },  
  },  
  output: {  
    path: path.resolve(__dirname, 'static', 'js', 'svelte'),  
    filename: 'demo.js',  
    chunkFilename: 'demo.[id].js',  
  },  
};

Of course adjust for your own needs.

The tsconfig.json I came to:

{  
  "compilerOptions": {  
    "outDir": "./static/js/svelte/",  
    "noImplicitAny": true,  
    "module": "es6",  
    "target": "es6",  
    "allowJs": true,  
    "moduleResolution": "node",  
    "types": [  
      "svelte"  
    ]
  },  
  "extends": "@tsconfig/svelte/tsconfig.json"  
}

And the required dependencies, I share with you the pnpm command:

pnpm i -D @tsconfig/svelte svelte svelte-loader svelte-preprocess ts-loader typescript webpack webpack-cli 

To run this you'd simply use npx webpack, or webpack in one of your npm scripts. I suggest you use --watch, so I have something like this in my package.json:

"scripts": {  
  "build": "webpack",  
  "watch": "webpack --watch"  
}

I hope that's useful and brings some value to you.

]]>
Twitter 3-legged OAuth with Django using Tweepy, for Twitter bots https://greduan.com/blog/2023/08/02/twitter-3-legged-oauth-with-django-using-tweepy-for-twitter-bots Wed, 02 Aug 2023 00:00:00 +0000 2023-08-02-twitter-3-legged-oauth-with-django-using-tweepy-for-twitter-bots A reference blog post on how to gain access to a Twitter account, from a Django application, for use in a Twitter bot either in Django or in some other system.

This is true as of June 2023. I hope it doesn't change, because I don't like complex OAuth flows with blackbox errors.

Necessary Twitter auth config

TWITTER_API_BEARER_TOKEN=''
TWITTER_CONSUMER_API_KEY_SECRET=''
TWITTER_CONSUMER_API_KEY=''

The TWITTER_API_BEARER_TOKEN is optional, it's for being able to use certain APIs. Use it if you need it.

The other two keys you will need for the OAuth flow to be able to take place.

In the Twitter Developer Portal, you find them under your project's "Keys and tokens", named "Consumer Keys", the "API Key and Secret".

Think of these as the user name and password that represents your App when making API requests.

Suggested Django model

class YourModel(models.Model):
    twitter_oauth_token = models.CharField(max_length=256, null=True)
    twitter_oauth_token_secret = models.CharField(max_length=256, null=True)
    twitter_access_token = models.CharField(max_length=256, null=True)
    twitter_access_token_secret = models.CharField(max_length=256, null=True)

    def get_tweepy_client(self):
        return tweepy.Client(
            bearer_token=os.environ.get('TWITTER_API_BEARER_TOKEN'),
            consumer_key=os.environ.get('TWITTER_CONSUMER_API_KEY'),
            consumer_secret=os.environ.get('TWITTER_CONSUMER_API_KEY_SECRET'),
            access_token=self.twitter_access_token,
            access_token_secret=self.twitter_access_token_secret,
        )

twitter_oauth_token and twitter_oauth_token_secret

During the OAuth process, you will define the twitter_oauth_token and twitter_oauth_token_secret in the model. You need to store them somewhere persistent, as you will need them to be present across two different requests.

In the end, how you keep this persistent from one request to the other is unimportant, I decided to do it through the model as it's a simple solution.

twitter_access_token and twitter_access_token_secret

These store the final auth keys you get from Twitter to forever represent this user via your Twitter bot.

Django endpoints (urls.py)

urlpatterns = [
    path('authenticate_twitter', login_required(views.authenticate_twitter), name='authenticate_twitter'),
    path('authenticate_twitter_callback/', login_required(views.authenticate_twitter_callback), name='authenticate_twitter_callback'),
]

Django endpoints (views.py)

def authenticate_twitter(request):
    model_id = request.GET.get('model_id')
    oauth1_user_handler = get_oauth1_user_handler(model_id)
    url = oauth1_user_handler.get_authorization_url(signin_with_twitter=True)
    model = YourModel.objects.get(id=model_id)
    model.twitter_oauth_token = oauth1_user_handler.request_token['oauth_token']
    model.twitter_oauth_token_secret = oauth1_user_handler.request_token['oauth_token_secret']
    model.save()
    return redirect(url)


def authenticate_twitter_callback(request):
    model_id = request.GET.get('model_id')
    # oauth_token = request.GET.get('oauth_token')
    oauth_verifier = request.GET.get('oauth_verifier')
    oauth1_user_handler = get_oauth1_user_handler(model_id)
    access_token, access_token_secret = oauth1_user_handler.get_access_token(oauth_verifier)

    model = YourModel.objects.get(id=model_id)
    model.twitter_access_token = access_token
    model.twitter_access_token_secret = access_token_secret
    model.save()

    return redirect('/yourapp/' + str(model_id))


def get_oauth1_user_handler(model_id: str):
    model = YourModel.objects.get(id=model_id)
    oauth1_user_handler = tweepy.OAuth1UserHandler(
        consumer_key=os.environ.get('TWITTER_CONSUMER_API_KEY'),
        consumer_secret=os.environ.get('TWITTER_CONSUMER_API_KEY_SECRET'),
        callback=f'{os.environ.get("HOST")}/yourapp/authenticate_twitter_callback/?model_id={model_id}'
    )
    if model.twitter_oauth_token_secret is not None and model.twitter_oauth_token_secret != '':
        oauth1_user_handler.request_token = {
            'oauth_token': model.twitter_oauth_token,
            'oauth_token_secret': model.twitter_oauth_token_secret
        }
    return oauth1_user_handler

authenticate_twitter(request)

When the user visits the URL, they will be redirected to Twitter for the OAuth interaction for your bot.

Before redirecting the user, we save the twitter_oauth_token and twitter_oauth_token_secret because we will need them.

authenticate_twitter_callback(request)

It finishes the authentication process, by getting the access tokens and storing them in the model.

These can now be used to instantiate a Tweepy client that has access to the Twitter API and can manipulate stuff on behalf of the account that accepted the bot's OAuth.
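
From there, using the client is standard Tweepy. A minimal sketch (the model lookup and tweet text are made up):

model = YourModel.objects.get(id=model_id)
client = model.get_tweepy_client()
# Tweets on behalf of the account that accepted the bot's OAuth
client.create_tweet(text='Hello from my bot!')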

get_oauth1_user_handler(model_id: str)

Just a helper function. You can use it or not, up to you.

It sets the oauth_token and oauth_token_secret to be the twitter_oauth_token and twitter_oauth_token_secret if they exist in the model.

This complexity could've been in the authenticate_twitter_callback(request) as well.

This step is not very well documented by Tweepy. It somehow assumes it all happens in the same context. It does mention it, but it's not very clear. I hope this alone saves you a couple hours.

Nota bene: In my code I use the HOST environment variable, you can hardcode this or use whatever other medium you have to define the host of the server.

Nota bene 2: The callback URL needs to be configured in your Twitter bot's config to be a valid "Callback / Redirect URI".

The user's entrypoint

Simply send them to the URL for the authenticate_twitter(request) endpoint, and that starts the interaction for the user.

In conclusion

The integration itself is rather straightforward, you just need to play around until you magically come across the correct solution of what values to pass where and which states to save and pass around and which not :)

I am a bit salty at how long it took to figure this out.

Resources

]]>
HTMX kills most single page applications https://greduan.com/blog/2023/06/17/htmx-kills-most-single-page-applications Sat, 17 Jun 2023 00:00:00 +0000 2023-06-17-htmx-kills-most-single-page-applications Language in the title is bait, let's get that out there right away.

But.

It is true that for a significant amount of cases where you see SPAs used, HTMX could easily replace the whole framework, leading to a simpler project while maintaining the snappy, responsive experience we know from SPAs.

I've had the pleasure of working with htmx in some recent projects, and indeed it removes any need for a front end SPA.

It's basically a rather expressive jQuery, especially when mixed with hyperscript.
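
For a taste, this button (the endpoint is made up) POSTs to the server and swaps itself with whatever HTML comes back, without writing any JS:

<button hx-post="/clicked" hx-swap="outerHTML">Click me</button>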

A lot of times we implement a SPA just because we need an interactive experience and we don't want to write jQuery and keep track of state in a weird way.

htmx+hyperscript gets rid of that need. The interactivity and state tracking can be done with the back end. In the end you have a situation where the source of truth, the back end, is also the one that gets to dictate what gets rendered.

Overall it leads to a much simpler application.

Of course if you have a real need for a VERY interactive experience, or difficult application-wide state tracking, then a SPA is the correct solution.

If you just need some interactivity while developing your app, a SPA is overkill and overcomplicated.

]]>
Book Summary: Don't Make Me Think Revisited A Common Sense Approach to Web and Mobile Usability by Steve Krug https://greduan.com/blog/2023/06/16/book-summary-dont-make-me-think Fri, 16 Jun 2023 00:00:00 +0000 2023-06-16-book-summary-dont-make-me-think These are my private notes about this book. Hopefully they are interesting and helpful to you as well.

Navigation

  • Main purpose of navigation is to give the user a sense of where they are.
  • Every page must have a name, and that name must match what the navigation calls that page as closely as possible.
  • Users jump into random links in the middle of the website. Navigation should be there to guide them as to their current location.
    • This is achieved primarily through very clear You Are Here indicators.
    • The You Are Here factor cannot be subtle, it must not be subtle.
  • Navigation should be scannable, and give the user a chance to find what they are looking for with only good guesses.
  • If using breadcrumbs, only make the last one bold.
  • There are a couple exceptions where navigation can get out of the way:
    • Forms
    • Other processes where you don't want to distract the user, and it is unlikely they will leave the process before being finished.
  • Preserve the distinction between visited and unvisited links. Gives your users a sense of navigation.

Homepage

  • "What the site is", conveying "the big picture"
  • Should answer the following questions:
    • What is this?
    • What do they have here?
    • What can I do here?
    • Why should I be here and not somewhere else?
  • In addition, a good indicator of "Where do I start?" is important

Plausible reasons to not make it clear (invalid):

  1. We don't need to, it's obvious
  2. After people have seen the explanation once, they'll find it annoying
  3. Anybody who really needs our site will know what it is
  4. That's what our advertising is for

Usability tests

  • "The average user" is a myth. All web use is unique and idiosyncratic.
  • We nonetheless treat how we ourselves use the web as religion. This can lead to unproductive arguments about how best to implement a UX.
  • The antidote is usability tests. Which will answer the very specific question of UX in a particular scenario.
    • "Does this dropdown, with these items, and this wording, in this context, on this page, create a good experience for most people who are likely to use this site?"
  • Usability tests can answer that.
  • Guidelines
    • Test early
    • Ideally three users
    • Once a month
    • With a predefined date per month
    • Note down the top usability problems (by how much of a block they cause the user), and tackle those for the following month

Mobile

  • "Managing real estate challenges shouldn’t be done at the cost of usability."

Accessibility

  • Links should have the keywords at the beginning; that's what blind users scan by. They'll "scan" the first few words and skip to the next link if they don't think it's relevant to them.

Other

There is the concept of how much good will your users hold towards you at any particular moment.

To increase that, be honest with them. Show them things you’d normally suffer by showing them (e.g. zero hidden costs, in fact making the cost apparent), and you gain their trust. Never hide things.

Users don’t mind clicks as long as they have confidence that it takes them to the right place.

Design/write for scanning, not reading. Users don't read, they scan, until they give up with scanning because they can't find what they want to find.

Explain things to users. They don't mind. E.g. an explanation of what the input will be used for.

Further reading

Usability testing: Rocket Surgery Made Easy: The Do-It-Yourself Guide to Finding and Fixing Usability Problems by Steve Krug

Screen readers: Guidelines for Accessible and Usable Web Sites: Observing Users Who Work With Screen Readers by Janice (Ginny) Redish

Accessibility: A Web for Everyone: Designing Accessible User Experiences by Sarah Horton and Whitney Quesenbery

Accessibility: Web Accessibility: Web Standards and Regulatory Compliance by Jim Thatcher

]]>
Adding TailwindCSS to Svelte components in a Django app https://greduan.com/blog/2022/12/26/adding-tailwind-to-svelte-components-in-a-django-app Mon, 26 Dec 2022 00:00:00 +0000 2022-12-26-adding-tailwind-to-svelte-components-in-a-django-app This is a follow-up guide for how to add Tailwind styles to your Svelte components in your Django app. You can also read part one which is about how to add the Svelte components to your Django app in the first place.

We essentially use Rollup for that Svelte setup, so we need to make sure that our Tailwind classes are detected within the Svelte files and the resulting CSS is injected in the JS files generated for the Svelte components.

There is a new guide updated for Svelte v4.

The short version

cd mysite/svelte
pnpm i -D tailwindcss postcss rollup-plugin-postcss autoprefixer

The rollup.config.js file:

const postcss = require('rollup-plugin-postcss');

// ...
{
  plugins: [
    postcss({
      plugins: [
        require('tailwindcss/nesting'),
        require('tailwindcss'),
        require('autoprefixer'),
      ],
    }),
    // ... Svelte
  ],
}
// ...

Got that info from the Tailwind docs, which also include instructions on how to integrate with SCSS, etc., but no instructions for Rollup specifically. (Note their examples target postcss.config.js, where plugins are an object map; rollup-plugin-postcss takes an array of require()'d plugins instead, as above.)

]]>
Using Svelte components in a Django app https://greduan.com/blog/2022/12/22/using-svelte-components-in-a-django-app Thu, 22 Dec 2022 00:00:00 +0000 2022-12-22-using-svelte-components-in-a-django-app I've found guides on how to have SPAs with Django as the back end. And a couple other variations.

But I haven't found a guide on how to integrate individual Svelte components into Django. Not to run the app, but rather to enhance the app.

There is a follow-up guide for how to add Tailwind styles to your Svelte components in your Django app.

There is a new guide updated for Svelte v4.

The short version

django-admin startproject mysite
cd mysite
mkdir svelte
cd svelte
pnpm init
pnpm i svelte
pnpm i -D @rollup/plugin-node-resolve rollup rollup-plugin-svelte
touch rollup.config.js .gitignore

The .gitignore file:

node_modules/
static/

The rollup.config.js file:

const svelte = require('rollup-plugin-svelte');
const resolve = require('@rollup/plugin-node-resolve');

const componentRollupConfig = (src, dest, name) => ({
  input: src,
  output: {
    file: dest,
    format: 'iife',
    name: name,
  },
  plugins: [
    svelte({
      include: 'src/**/*.svelte',
    }),
    resolve({ browser: true }),
  ],
});

module.exports = [
  componentRollupConfig('src/SlimeChat.svelte', 'static/js/svelte/SlimeChat.js', 'SlimeChat'),
];

In your mysite/settings.py file, you need to add the following value to your STATICFILES_DIRS:

STATICFILES_DIRS = [
    ...
    BASE_DIR / 'svelte/static',
    ...
]

Your package.json scripts:

{
  "scripts": {
    "build": "rollup --config",
    "dev": "rollup --config --watch"
  }
}

The example quick usage in your Django template:

{% csrf_token %}
<div id="chat"></div>

{% load static %}
<script src="{% static 'js/svelte/SlimeChat.js' %}"></script>
<script>
  const csrfToken = document.querySelector('input[name="csrfmiddlewaretoken"]').value;
  const app = new SlimeChat({
    target: document.getElementById('chat'),
    props: {
      csrfToken,
      slimeName: '{{ slime.name }}',
      slimeId: '{{ slime.id }}',
    },
  });
</script>

That should get you kickstarted, and you can figure out the details yourself if any are missing, but I believe that's all. Of course you'll need to create that SlimeChat.svelte file 🙂

The longer version

Actually the short version covers most of the points that you need to be aware of, with all the connecting points.

But I wanted to provide some context, explain what's happening in a couple of the steps.

The approach

The idea is to create a single .js file per entry file. Each one of these entries would be an "app", a component instantiated and attached to a DOM element.

rollup.config.js

Essentially what we need is Rollup to generate an individual file per entry. For this we basically need to duplicate the configuration for each file.

In order to duplicate the config, without repeating ourselves, we create a function to generate the same config just with different params (like the entry and output paths).

That's what the componentRollupConfig(src, dest, name) function is for.

The CSRF token

If you want to make any queries to the Django backend, you'll be needing the CSRF token. Since we can't put it in Svelte, we have to generate it via the Django template and then grab that value via JS, and pass that to the Svelte component.

I did that above via the following JS:

const csrfToken = document.querySelector('input[name="csrfmiddlewaretoken"]').value;

Which you can then pass into the Svelte component.

The Svelte component's usage

Actually the component's usage is nothing special.

You need a DOM element to attach the component to, that's this part of the HTML:

<div id="chat"></div>

You need to include the JS file for your component:

<script src="{% static 'js/svelte/SlimeChat.js' %}"></script>

And the Svelte component usage is just:

const app = new SlimeChat({
  target: document.getElementById('chat'),
  props: {
    csrfToken,
    slimeName: '{{ slime.name }}',
    slimeId: '{{ slime.id }}',
  },
});

Of course, I included some Django templating, because I need to grab the data from the back end, but you can do that in any which way you like.

As you can see it's a similar usage to when you are creating a Svelte SPA.

In conclusion

Using Svelte inside a Django app to enhance it, instead of replacing it, is actually rather simple.

The main points are:

  • Create a folder for the Svelte components
  • Create a Rollup config for each individual component you'll be using
  • Make the statically generated files available to Django
  • Include the file in the HTML
  • Instantiate the component into a DOM element

Very simple if you know what you're going for.

I will write a follow up guide on how to include styles for your Svelte components, including Tailwind styles.

]]>
Managing sync state https://greduan.com/blog/2022/06/29/managing-sync-state Wed, 29 Jun 2022 00:00:00 +0000 2022-06-29-managing-sync-state Whenever your application needs to fetch data, allow the user to manipulate it, and then allow the user to sync it back to the system it fetched it from, there is a simple pattern you can apply to keep track of the state of the data.

(State diagram omitted; Mermaid source linked in the original post.)

In essence you'd have the following possible states:

  1. Fetched. Fresh off the oven.
  2. Draft. Contains changes but hasn't been synced back.
  3. Synced. The content has been written back to the external system.

Your app might not use the Draft state, in which case you only need states 1 and 3.

Error handling

If there are errors writing back to the external system, the state field of the data doesn't change, but we write back an error to the database.

If the front end finds an error, it displays that error.

If there is success syncing back, the error in the database is set to null, to remove the error from the front end.
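
As a minimal sketch of what that record can look like (the names are mine, and your schema will differ):

from dataclasses import dataclass
from enum import Enum
from typing import Optional

class SyncState(str, Enum):
    FETCHED = 'fetched'  # Fresh off the oven
    DRAFT = 'draft'      # Contains local changes, not synced back yet
    SYNCED = 'synced'    # Written back to the external system

@dataclass
class Record:
    data: dict
    state: SyncState = SyncState.FETCHED
    sync_error: Optional[str] = None  # Set on failed sync, cleared on success

def mark_sync_result(record: Record, error: Optional[str]) -> None:
    if error is None:
        record.state = SyncState.SYNCED
        record.sync_error = None  # Success clears the error shown in the front end
    else:
        # The state stays as-is; we only record the error for the front end to display
        record.sync_error = error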

]]>
Tailwind, using grid-cols-12 instead of mx-auto https://greduan.com/blog/2022/06/27/tailwind-grid-cols-mx-auto Mon, 27 Jun 2022 00:00:00 +0000 2022-06-27-tailwind-grid-cols-mx-auto The problem

You want to lay out your content in columns, in such a way that the column stays in the middle, and its size reduces as the screen gets smaller, until the screen gets small enough (goes mobile) that the content should take up the full width of the page.

mx-auto

In Tailwind, mx-auto translates to the following:

margin-left: auto;
margin-right: auto;

m stands for margin, x stands for the x (horizontal) axis.

It's an age-old trick to put an object in the middle of the screen, on the X axis, given it has a set width.

This has an issue though.

If you have two bits of content, let's say for example you have your hero header, and your body content, and they're of different widths, then they won't align with each other.

Let's say the hero is smaller than the content, because it's just a short tagline.

Your hero will be in the middle of the page, while your content will be more left towards the window, as it's wider.

A grid layout

The idea of a grid is to imagine 12 columns that go from the left of your website to the right of your website.

And then aligning your content along those 12 columns.

That'd be a grid layout.

You can find a variety of examples online of what this looks like, so you can visualize it.

What you actually want

You want your hero's tagline, and your content, both to align on the same "edge" on the left side of the page, regardless of the page size.

This is more visually appealing, and more logical in terms of how our brain processes information, thanks to us being used to print.

What it looks like

(Screenshots: the mx-auto layout vs. the grid-cols-12 layout.)

import React from 'react';  
import PropTypes from 'prop-types';  
import classNames from 'classnames';  
  
export const Content = ({ children, className }) => (  
  <div className={classNames('w-full grid grid-cols-12', className)}>  
    <div className="col-span-0  md:col-span-1  lg:col-span-2"></div>  
    <div className="col-span-12 md:col-span-10 lg:col-span-8">{children}</div>  
    <div className="col-span-0  md:col-span-1  lg:col-span-2"></div>  
  </div>  
);  
  
Content.propTypes = {  
  children: PropTypes.node,  
  className: PropTypes.string,  
};

What I'm doing here is:

  • Making sure we take the full width of our parent container with w-full.
  • Defining a basic 12 column grid with grid grid-cols-12.
  • As its content, defining 3 divs, a "left" div, a "middle", content div, and a "right" div.
  • To the left/middle/right columns, giving them breakpoints to take up different amounts of columns depending on the size of the screen. We do this with col-span-n.

You can adjust the specific sizes for your own use case and website, but in my case what I'm doing is basically the following:

By default (mobile), the left columns will take up no space at all, and the content will take up the full width.

Then when we get to the medium screen size, we want the sides to each take up one column width, and the content to take 10 column widths.

When we get to larger screens, we want the sides to take 2 column widths each, and the content to take 8 column widths.

Each of these configurations adds up to 12 columns.
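
Usage is then just wrapping each section, so for example the hero and the body share the same left edge (a sketch):

<Content>
  <h1>The hero tagline</h1>
</Content>
<Content>
  <p>The body content, aligned to the same edge.</p>
</Content>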

]]>
Generating a Swagger file with ASP.Net Core and generating API code for Angular https://greduan.com/blog/2022/03/20/generating-a-swagger-file-with-aspnet-core-and-generating-api-code-for-angular Sun, 20 Mar 2022 00:00:00 +0000 2022-03-20-generating-a-swagger-file-with-aspnet-core-and-generating-api-code-for-angular If you ever read The Pragmatic Programmer, you'll be familiar with the concept of Code Generation. It has a section dedicated to it.

In order to avoid writing a bunch of front end code over and over, just to reflect the models and endpoints the back end provides, you can generate it all automatically.

This tutorial will be divided in two sections, the back end section and the front end section.

This tutorial will use C# ASP.Net Core for the back end framework, and Angular for the front end.

Back End (ASP.Net Core)

When you generate a new web application project with Rider it actually already includes Swagger for you in the project, using SwashBuckle. But it doesn't generate any files that you can use outside the server's code.

The aim is to generate a swagger.json file, which we will use later for the front end code. AND, we want to generate it automatically, without having to run extra commands.

To achieve this we will generate the swagger.json file at build time.

I'll be targeting framework net6.0, lang version 10. Reason for this is that it's the solution I found to assembly version mismatches.

The commands you need to run are:

# cd MySolution
dotnet new tool-manifest
dotnet tool install SwashBuckle.AspNetCore.Cli

What this will achieve is make the Swashbuckle CLI tools available from the context of the solution.

You will want to make a directory for your Swagger-generated files.

mkdir -p swagger/v1

Next, open up your .csproj file and make sure the following lines are present:

<Target Name="OpenAPI" AfterTargets="Build">
    <Exec Command="dotnet swagger tofile --output ./swagger/v1/swagger.yaml --yaml $(OutputPath)$(AssemblyName).dll v1" WorkingDirectory="$(ProjectDir)" />
    <Exec Command="dotnet swagger tofile --output ./swagger/v1/swagger.json $(OutputPath)$(AssemblyName).dll v1" WorkingDirectory="$(ProjectDir)" />
</Target>

What this will achieve is that after your solution builds it will generate a couple Swagger files, swagger.json and swagger.yaml. In reality you only need the JSON version for the purposes of this tutorial.

While you're at it, make sure your Swashbuckle.AspNetCore version is 6.3.0.

And that should be it. If it's working, when you build your project you should find out that you have a swagger.json and swagger.yaml in your swagger/v1/ folder.

Front end (Angular)

We're going to use a project named ng-openapi-gen to generate the front end models and services, based on the swagger.json file. And we'll use chokidar-cli to run ng-openapi-gen automatically whenever the swagger.json file changes.

So first, we configure ng-openapi-gen via the ng-openapi-gen.json file, we put that in our front end project's root dir:

{
  "$schema": "node_modules/ng-openapi-gen/ng-openapi-gen-schema.json",
  "input": "../server/MySolution/MySolution.Web/swagger/v1/swagger.json",
  "output": "src/app/api",
  "ignoreUnusedModels": false
}

And now for the tooling side of things:

yarn add -D ng-openapi-gen chokidar-cli

And to your npm scripts, you need to add:

{
  "swagger": "ng-openapi-gen",
  "swagger:watch": "chokidar '../server/MySolution/MySolution.Web/swagger/v1/swagger.json' -c 'npm run swagger'"
}

Now you can automatically generate front end code, and stop writing the same code over and over.
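
What you get out is typed models plus one Angular service per API tag. As a hypothetical sketch (the actual service and method names depend on your controllers and operation IDs):

import { Component, OnInit } from '@angular/core';
// Hypothetical generated artifacts; real paths and names depend on your swagger.json
import { UsersService } from './api/services/users.service';
import { User } from './api/models/user';

@Component({ selector: 'app-users', template: '...' })
export class UsersComponent implements OnInit {
  users: User[] = [];

  constructor(private usersApi: UsersService) {}

  ngOnInit(): void {
    // Fully typed call generated from the swagger.json
    this.usersApi.getUsers().subscribe(users => (this.users = users));
  }
}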

ng-openapi-gen resulting folders

Misc.

Setting up Angular

Make sure to follow the instructions on ng-openapi-gen to setup Angular properly to use the generated code for the various cases it supports.

Git

My .gitignore includes these lines for the back end:

MySolution.Web/swagger/

The build process fails if the folders don't exist though. So I suggest you also run the following commands, in order to make sure the folder isn't lost:

touch MySolution.Web/swagger/v1/.keepme
git add -f MySolution.Web/swagger/v1/.keepme

And for the front end:

src/app/api/

In conclusion

With the above, the back end's code is used as the source for the swagger.json file, and the front end automatically generates code you can use to access those endpoints, with TypeScript types to go along with it, so you're all typed up.

]]>
Kubernetes RabbitMQ Certificate Revocation List https://greduan.com/blog/2022/02/02/kubernetes-rabbitmq-certificate-revocation-list Wed, 02 Feb 2022 00:00:00 +0000 2022-02-02-kubernetes-rabbitmq-certificate-revocation-list The problem

You have your Kubernetes (k8s) cluster, and you have your RabbitMQ charts. You're protecting access to them via key pair certificates. You now need to revoke access to one of the certificates.

In Kubernetes there is no support for CRLs or anything similar.

RabbitMQ does, however, support them. And it's actually relatively straightforward. But it's very poorly documented. Hopefully this helps out a poor soul.

Requirements

This post assumes you:

  • Have a RabbitMQ setup in your k8s cluster via Bitnami's Helm charts.
  • Have already figured out how to revoke a certificate and generate the CRL .pem file.

For reference you can check the following pages:

That means you also have an openssl.cnf file, which has config lines resembling the following:

dir = ca
certificate = $dir/ca-cert.pem
private_key = $dir/ca-key.pem
database = $dir/index.txt
new_certs_dir = $dir/certs
serial = $dir/ca-cert.srl

And your index.txt does indeed mark your certificate as revoked.

Just to get the basics out of the way :)

How to

crl/ folder

$ mkdir crl
$ mv crl.pem crl
$ c_rehash crl
$ ls crl
b0a7999f.r0  crl.pem

We will explain later what this step is for; for now, just know the files b0a7999f.r0 and crl.pem are the same, only the filename differs. c_rehash names the copy after a hash of the issuing CA, which is the naming scheme the ssl_crl_hash_dir cache further below expects.
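
If you're curious where that b0a7999f name comes from, you can ask openssl for the hash yourself; the output should match the filename c_rehash picked:

openssl crl -hash -noout -in crl/crl.pem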

This folder should now be available to the RabbitMQ charts, so it should live under rabbitmq/crl. Note we removed the crl.pem file from that copy of the folder, honestly not sure if that's necessary.

Expiration

Note, a CRL file has a built-in expiration. This means you need to refresh it regularly, or, with the -crldays flag, extend that expiration date into the far future. For example:

# extended 100 years into the future
openssl ca -gencrl -crldays 36500 -keyfile ca/ca-key.pem -cert ca/ca-cert.pem -out crl/crl.pem -config openssl.cnf

If you don't do this, when it expires RabbitMQ will have trouble connecting ANY clients, as the CRL file is then considered invalid or broken.

Mounting the crl/ folder

In the RabbitMQ charts values config, you can use the following:

extraVolumes:
  - name: crl-volume
    secret:
      secretName: rabbitmq-crl

extraVolumeMounts:
  - name: crl-volume
    readOnly: true
    mountPath: "/etc/crl"

extraSecrets:
  rabbitmq-crl:
    b0a7999f.r0: |-
      -----BEGIN X509 CRL-----
      b3JpZXMgbHRkLiBDQRcNMjIwMjAyMTMwNjQ3WhcNMjIwMjA5MTMwNjQ3WjAcMBoC
      daGBapUlbRujU5++5w0bhSmU3+gTNctNTlzpuCklf0an9XCP48DIF8659+apXN6e
      MIIBiTBzMA0GCSqGSIb3DQEBCwUAMCYxJDAiBgNVBAMMG3RyaWFyYyBsYWJvcmF0
      F+9w+IF2iNPfp346kMZuE97ywtlp6LJmeZszd7HxClfU8eDSyj/FMwuerooVzkxQ
      CQC7NRZnlyVLlhcNMjExMjEzMTEwNjM2WjANBgkqhkiG9w0BAQsFAAOCAQEAbqas
      FPUuitY76A8Gt09+GTmayOkQMkgRpBXX/LOkjDdJ2rgjjtgklZsYq/Q6rMUYxj0B
      HP2FasmBULDuAuDPBzDcta3Ih5x6lxE+gkBkm07hE39TV5DH+N99ZrKdz0oiUGeD
      YfYd6Udu313BXjEGuHnItvbsw1JKZdGRclbdMBBEUURV5jB4lu4D8dIkjcjAi8oC
      DvlsMVdazm9A0Ju1BQ==
      -----END X509 CRL-----

Note here I show how it should end up after templating. (Because actually we had trouble doing it through templating so we hardcoded it for now, feel free to do it with templating.)

What you should end up with is that your RabbitMQ pods now have the file b0a7999f.r0 available under /etc/crl.

We have to do it this way because that's how RabbitMQ works with CRLs.

Configuring RabbitMQ

Once again in the RabbitMQ charts values file, you must add the following:

advancedConfiguration: |-
  [
    {rabbit, [
       {ssl_options, [{cacertfile,"/opt/bitnami/rabbitmq/certs/ca_certificate.pem"},
                      {certfile,"/opt/bitnami/rabbitmq/certs/server_certificate.pem"},
                      {keyfile,"/opt/bitnami/rabbitmq/certs/server_key.pem"},
                      {verify,verify_peer},
                      {fail_if_no_peer_cert,false},
                      {crl_check, true},
                      {crl_cache, {ssl_crl_hash_dir, {internal, [{dir, "/etc/crl/"}]}}}]}
     ]}
  ].

Let's walk through that. First, this is basically the config you have available under configuration, so why are we repeating ourselves?

There are some things that cannot be specified via that format, but that we do need to specify. However, according to this mailing list post, the normal config and the advanced config are not merged cleanly, so we need to specify the ssl_options in full in the advanced config.

We copy the values from the normal config, and then at the end we add the two important config values for us.

{crl_check, true}, when true, enables checking certificates against the CRL we set up above. You can quickly disable this feature by changing it to false. Of course note that that would make all the revoked certificates valid again.

{crl_cache, {ssl_crl_hash_dir, {internal, [{dir, "/etc/crl/"}]}}} here we're basically just saying "look in /etc/crl for the CRL".

In conclusion

Actually it's very straightforward! You just need to scour the internet for bits and bobs of information and put it all together into one whole package. This post attempts to provide that.

If you know any of this information to be wrong, or find something that could be improved, shoot me an email at me@greduan.com, help another poor soul.

References:

]]>
Initial setup of a local WordPress site using Devilbox https://greduan.com/blog/2021/04/06/devilbox-wordpress Tue, 06 Apr 2021 00:00:00 +0000 2021-04-06-devilbox-wordpress First, I'd like to introduce a piece of software called Devilbox. Devilbox is a really handy tool that I highly recommend when building or developing PHP sites. In short, it's like a feature-packed LAMP stack, built with Docker.

Devilbox's homepage is here: http://devilbox.org/

And its documentation is here: https://devilbox.readthedocs.io/en/latest/

So let's jump right in and use it.

This article is mostly the same content as Devilbox's installation docs, but I think it's useful as a Japanese translation.

Installing Devilbox

This is mostly the same content as Devilbox's installation docs, but I think it's useful as a Japanese translation.

First of all you'll need Docker, so if you haven't installed it yet, start there and come back to this article once you're done.

Installing Devilbox is pretty easy. First, pull the Devilbox repo.

(I run the following commands from my home directory, so it ends up as ~/devilbox)
git clone https://github.com/cytopia/devilbox
cd devilbox

From here we enter the minimal configuration.

cp env-example .env
id -u (remember this as the UID, short for user ID)
id -g (remember this as the GID, short for group ID)

Then, edit .env with your editor of choice, setting NEW_UID to that UID and NEW_GID to that GID. For example, like this:

NEW_UID=1001
NEW_GID=1002

If you're on a Mac

You additionally need this setting, for performance:

MOUNT_OPTIONS=,cached

And with that, the installation is complete.

How to start Devilbox

This is mostly the same content as Devilbox's docs on starting it up, but I think it's useful as a Japanese translation.

Starting

cd devilbox
docker-compose up

If you only want to start specific images, pass those image names to docker-compose up. For example:

docker-compose up httpd php mysql

In fact, for WordPress those three are all you need.

Stopping

docker-compose down
docker-compose rm -f (this one is also important, according to the Devilbox docs)

Restarting

docker-compose down
docker-compose rm -f
docker-compose up httpd php mysql

How to use Devilbox

Installing WordPress

From here on I'll just tell you what to do without explanations, not that I've explained much so far anyway (haha).

  1. Edit /etc/hosts; afterwards you'll be able to use http://hello.loc.

    127.0.0.1 hello.loc
    
  2. Create the folder.

    mkdir -p data/www/hello
    
  3. Download WordPress and put it in the data/www/hello/htdocs folder. If you open http://hello.loc and can see WordPress's initial screen, it worked.

Configuring WordPress

You need to create a database for WordPress.

cd devilbox
./setup.sh
(from inside Devilbox)
mysql -u root -h 127.0.0.1 -p -e 'CREATE DATABASE hello;'
(there is no password, just press the Enter key)

Next, the WordPress setup.

  1. Open http://hello.loc.
  2. Go through the setup as usual.
  3. When you reach the database configuration step:
    • The database name is the one we just created, hello
    • The username is root
    • There is no password
    • The database host is 127.0.0.1

Congratulations! If everything went well, you have successfully finished setting up your local WordPress site.

From here you move on to the usual admin user setup.

If you can read English

If at all possible, please read the docs in English; in this article I've really only done a basic translation.

]]>
Exiting early, cognitive load https://greduan.com/blog/2018/09/10/exiting-early-cognitive-load Mon, 10 Sep 2018 00:00:00 +0000 2018-09-10-exiting-early-cognitive-load Continuing from our last post about cognitive load, today we'll talk about another general practice that will do you good to simplify your code's logic and reduce its cognitive load.

This post will be about a pattern that I think is referred to as returning early or something to this effect, but since you could also throw to get out of it I like "exiting" early.

Here is the scenario for your function:

  • Takes one argument, could be an object or an array.
  • If it's an object, it can't be empty.
  • If it's an array, it can't be empty.
  • It can't be anything else.
  • If any of the above conditions aren't fulfilled, throw. (You can do whatever you want in your own code, but I'll be throwing.)

There's many ways to go about this logic, of course, but here's how I'd go about it with the commandment in mind "Exit Early".

// For brevity, we'll use Lodash in our example
const myFunction = theArg => {
  // Neither an array nor an object (note: _.isObject is also true for arrays)
  if (!_.isArray(theArg) && !_.isObject(theArg)) {
    throw new Error('theArg must be an object or array.');
  }

  if (_.isArray(theArg) && theArg.length < 1) {
    throw new Error('theArg cannot be an empty array.');
  }

  if (_.isObject(theArg) && Object.keys(theArg).length < 1) {
    throw new Error('theArg cannot be an empty object.');
  }

  if (_.isArray(theArg)) {
    // Array logic here
  } else {
    // Object logic here
  }
};

Do you see the pattern? Perhaps it's more obvious if we show the worst alternative I can come up with:

const myFunction = theArg => {
  if (_.isArray(theArg) || _.isObject(theArg)) {
    if (_.isArray(theArg)) {
      if (theArg.length > 0) {
        // Array logic here
      } else {
        throw new Error('theArg cannot be an empty array.');
      }
    } else {
      if (Object.keys(theArg).length > 0) {
        // Object logic here
      } else {
        throw new Error('theArg cannot be an empty object.');
      }
    }
  } else {
    throw new Error('theArg must be an object or array.');
  }
};

Now you be the judge as to which one is easier to read. For me it's the first one, without a doubt, even if actually it's more characters I have to type out as I have to repeat some conditionals.

The basic pattern is to identify which logical branches would prevent your code from working, and to handle them first. Your if clauses deal with the wrong data first.

This flattens the function's structure, makes reading more streamlined, makes reorganizing code simpler, and is simply easier to think about, as there are fewer logical branches to keep in mind.
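
The same shape works with a plain return when throwing would be overkill. Here is a minimal runnable sketch; the record-saving scenario and names are invented for illustration:

const saveIfValid = record => {
  if (!record) return false;     // exit early: nothing to save
  if (!record.id) return false;  // exit early: malformed record

  // happy path sits at the top level, with no nesting
  console.log('saving record', record.id);
  return true;
};

saveIfValid({ id: 42 }); // logs "saving record 42" and returns true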

]]>
Assigning variables, cognitive load https://greduan.com/blog/2018/09/01/assigning-variables-cognitive-load Sat, 01 Sep 2018 00:00:00 +0000 2018-09-01-assigning-variables-cognitive-load Something I find myself pointing out during code reviews with some frequency is the concept of code layout and how it affects cognitive load.

I don't remember where I learnt this originally; it was probably a combination of articles some time ago, so it would be hard to give you a list of sources.

But here is the basic concept of it.

First let's define cognitive load, to be clear. In this case, when I say cognitive load I'm referring to how many things I have to keep in mind to understand how a piece of code will be interpreted by the computer. Some degree of cognitive load is unavoidable: which file I'm in, which function, the function's arguments, etc.

But there are things a programmer can do to worsen the cognitive load, and conversely, to reduce it.

In this post we'll explore just variables: the simple ways in which you can reduce cognitive load by using fewer variables.

Let's start with an easy example:

const something = process(data);
return {
  something,
};

That is easily refactored to:

return {
  something: process(data),
};

Why is that better? Because there is no need for my eyes or my mind to jump around. At no point do I need to remember what something was assigned to. It's just there.

Now in that example it happens to be obvious. Let's look at another example that is more problematic.

const process = async data => {
  const partOfTheData = extract(data);

  // 20 lines of code here, partOfTheData is not used..

  const someDataWeJustFetched = await fetch(...);
  const secondPartOfData = extract(someDataWeJustFetched);

  // 20 lines of code here, partOfTheData is not used..

  return {
    // other stuff ...,
    partOfTheData,
    secondPartOfData,
  };
};

That one may not seem so evil, but it is the same idea. It could be improved as:

const process = async data => {
  // 20 lines of code here.

  const someDataWeJustFetched = await fetch(...);

  // 20 lines of code here.

  return {
    // other stuff ...,
    partOfTheData: extract(data),
    secondPartOfData: extract(someDataWeJustFetched),
  };
};

Now that assumes that somehow someDataWeJustFetched is used in those 20 lines of code or something, otherwise we can do even this:

const process = async data => {
  // 40 lines of code here.

  return {
    partOfTheData: extract(data),
    secondPartOfData: extract(await fetch(...)),
  };
};

You see?

The basic concept is as follows:

  1. If you can get away with not defining a variable, consider not defining it.
  2. Use the direct value instead of a variable where possible.
  3. If you must use a variable, define it as close to its usage as possible.

Note: The following section's example is not really convincing, so think about the point I'm making rather than the exact example I'm showcasing. :)

Now, somebody will be sharp enough to notice that these examples don't deal with duplication. What about when I use one variable several times?

Then the conditions change.

Now I swear I read this in an article pointing this out, but I can't find it so I'll do my best to present the argument.

Let's say you have a function which takes in an argument and returns a result computed from it. At some point, the input has to be squared, and the squared value is used in several spots.

const getSomeNumber = inNumber => {
  const squared = inNumber * inNumber;

  // <use of squared>
  // 20 lines of code.
  // <use of squared>

  return result;
};

In that case, when reading the instances where you use the squared variable, you're introducing cognitive load. Why?

For the simple reason that when I read squared I have to remember what it means; no matter how simple it is, I have to remember.

Perhaps you'd be better served using inNumber * inNumber several times instead, following point number 2 above.

"But doesn't that break Don't Repeat Yourself?" I hear you say. And the answer is actually yes. But duplication in and of itself is not evil; it's stupid duplication that is. And in this case, squared is always the square. It takes the same amount of effort to search and replace squared with newValue as it does to replace inNumber * inNumber with newValue. But the latter is easier to read.

That's all my thoughts on that subject. Hopefully you learnt something, or at least I got you thinking.

]]>
Git config https://greduan.com/blog/2018/08/27/git-config Mon, 27 Aug 2018 00:00:00 +0000 2018-08-27-git-config Here's something I do every time I get a new computer; it's important enough that I have a script for it in my dotfiles, where you can find its latest version.

I thought I'd explain each line in this blog post; should make for somewhat interesting material. All of these are also explained in the Git config man page.


I run all of these with git config --global followed by the setting, so I will omit that prefix below since it applies to literally all of them. Do be aware of whether you wanna use --global or not.
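
So, to be explicit, each setting below is applied like this, taking the first one as the example:

git config --global user.name "greduan"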

user.name "greduan"
user.email "me@greduan"
github.user greduan

These are the basics; you actually need at least the first two before Git will let you commit. I believe this is in every Git "getting started" guide or something like that.

Though github.user used to be recommended by GitHub some years ago, I think it's no longer necessary or relevant.

commit.gpgsign true

In some setups I use this, in others not. At the time of writing this, this is commented out in the script.

Be aware that if you want to use this, you need to set user.signingkey <key> first. Here's the Git book on the subject of signing with GPG.

commit.verbose true

Basically makes sure that the diff of what you're committing is included in the commit message file passed to your editor when you run git commit.

This is basically the same as passing the -v/--verbose option to your git commit.

core.editor nvim

Self-explanatory. Which editor to use when editing the commit messages or running an interactive rebase.

core.excludesFile ~/.gitignore

This one is one of my favorites, and a really easy one to apply. It allows you to set up a .gitignore file that will be active on all your projects. Very useful for system files and editor files: stuff to do with your environment, as opposed to your project.

For example it'd include stuff like .DS_Store, or Emacs backup files and so on.
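
As a sketch, a minimal global ~/.gitignore along those lines might contain the following (the patterns are just examples: macOS metadata, Emacs backups, Vim swap files):

.DS_Store
*~
*.swp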

core.pager "less -R"

The pager is what Git uses when you ask for, say, git log or git diff; it's what shows more than one page of text in the terminal.

This has to do with how less handles colors; that's what the -R option is for. It's some fix I needed at some point and it's stayed there since.

core.ignorecase false

This is one strategy to work around the fact that macOS's default file system isn't case-sensitive, which causes problems with Git. I don't remember having this trouble recently at all, so it probably worked.

help.autocorrect 1

Automatically execute a mistyped command if Git recognizes that there's only one sensible alternative. Actually, -1 would run it immediately, 0 would not run anything, and >0 is the number of tenths of a second to wait before executing it.

color.ui true

Colors the output of stuff like git log when it goes to the terminal. If set to always it would color output even when not writing to a terminal.

core.eol lf
core.autocrlf input

To do with the line ending format. I like my Unix, so I set eol to lf.

I set autocrlf to input so that Git doesn't mess around with the files.

merge.conflictstyle diff3
merge.tool vimdiff
mergetool.prompt false
diff.algorithm patience
diff.compactionHeuristic true

Just some settings on how to handle diffs and merges. I don't remember the details of these, and they may not even be relevant now.

They simply set which editor handles merge conflicts and (potentially) improve the algorithm used to produce smaller and/or more specific diffs. This is up to your preferences, really. Read up on it.

push.default simple

You'll have to read the details in the man page, but this is just how I expect my Git push to behave.


And that's the end of that blog post. Longest technical one in a while; hope you got some good tips out of this one.

BTW, this was written while listening to seiyuu radio shows. Very enjoyable pastime nowadays.

]]>
On Adblockers https://greduan.com/blog/2018/03/03/adblockers Sat, 03 Mar 2018 00:00:00 +0000 2018-03-03-adblockers It has come to my attention that people are not aware of the current adblockers situation, so here's the short version of what you should be doing:

Use uBlock Origin by gorhill.

  • It is open source
  • It does not have sponsored ads, unlike Adblock Plus
  • It is the fastest adblocker
  • It is very powerful
  • It's always been available on both Chrome and Firefox, with the same code, so you don't have to make a different choice depending on your browser.

For more points about whether it's better than ABP, check uBlock Origin's wiki page on it.

For more information on its advanced mode and so on, check the wiki page on that.

Of course, the important question: should you actually make the switch? I say you've nothing to lose and plenty to gain. Try it; it's literally uninstalling your current adblocker and installing another one. Maybe you'll feel a difference, maybe not, but the point is that you're doing the right thing.

Please note, uBlock and uBlock Origin are different things. The former is a fork of uBlock Origin; I know there was some drama there, but I don't care about it now so I've forgotten what the drama was. I stuck with the decision to use the original some time ago and haven't re-evaluated, but if you think it's worth evaluating the decision yourself, do so.

Of course feel free to research all of this on your own, but be aware that Adblock Plus is not the best option and it hasn't been for a while, there are other options out there.

]]>
New website https://greduan.com/blog/2018/02/19/new-website Mon, 19 Feb 2018 00:00:00 +0000 2018-02-19-new-website So there's a new website and blog, in both design and engine.

So the basic changes are:

  • https://projects.greduan.com no longer exists. Didn't offer value.
  • https://blog.greduan.com is now https://greduan.com/blog
  • https://greduan.com/gpg-pub-key.txt is now https://greduan.com/gpg
  • New design! Less CSS. Less web obesity crisis. I wasn't particularly contributing to it before anyway, but now it's even less, and I'm getting a taste for this simple aesthetic.
  • New me! Which means different topics, different writing styles, and much more JS expertise.
  • The website is no longer open source. That did not actually provide much value and was a bit of a PITA, somehow.
    • The content license can now be easily read at https://greduan.com/license
  • No longer hosted on DigitalOcean (which was hella fast) using nginx; now hosted on Now using a simple Fastify server. Makes for much easier deployments and site management.

In case you're wondering, nowadays (at the time of writing) I work for a company called Impala, in the hotel PMS industry. If you're in that industry, I am sure we're of interest. I work full-time remote, which is quite nice.

Nowadays I live in Grenchen, Switzerland, the year before that I lived in Amsterdam, The Netherlands.

Look forward to new content (hopefully!). I'll be talking about hopefully more valuable topics than just my experience with stuff.

]]>
How to run a Promises array in a series https://greduan.com/blog/2016/06/17/promise-series Fri, 17 Jun 2016 00:00:00 +0000 2016-06-17-promise-series Put the following code in a file and run it with Node.js:

var calls = [];

var promises = [
  new Promise(function (resolve) {
    setTimeout(function () {
      calls.push('first');

      resolve();
    }, 100);
  }),
  new Promise(function (resolve) {
    calls.push('second');

    resolve();
  }),
];

setTimeout(function () {
  console.log(calls);
}, 100);

Please be aware that this code is bad practice: I'm creating a side-effect with a Promise, and side-effects like that can be hard to debug.

When you run it, the output is ['second', 'first']. Why did calls have content at all? And why that order? Because of how Promises behave: they execute as soon as the JS engine evaluates them, not when you call .then() on them, and the first one finishes (approximately) 100ms after the second because of its setTimeout.

So then, can we somehow run Promises in series, even if that sorta defeats the point of Promises? Yes you can.

You can game the JS engine a bit.

Try running the following:

var Promise = require('bluebird');

var calls = [];

var promises = [
  function () {
    return new Promise(function (resolve) {
      setTimeout(function () {
        calls.push('first');

        resolve();
      }, 100);
    });
  },
  function () {
    return new Promise(function (resolve) {
      calls.push('second');

      resolve();
    });
  },
];

Promise
  .each(promises, function (promise) {
    return promise();
  })
  .then(function () {
    console.log(calls);
  });

The output is ['first', 'second']! How are the Promises running in order now?

The answer is simple. The Promises are now created inside functions, and a function's contents aren't executed until the function is invoked, which is what Promise.each does. And the way Promise.each works is that if you return a then-able, it waits until the then-able resolves before continuing with the next item in the loop.

And that's it: because the next function isn't executed until the previous function's then-able resolves, the Promises run in order.
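
Incidentally, you don't strictly need Bluebird for this part. As a sketch, the same trick can be written as a plain reduce over the same promises array of functions, and it works with native Promises too:

var series = promises.reduce(function (chain, makePromise) {
  // invoke the next function only once the previous then-able has resolved
  return chain.then(makePromise);
}, Promise.resolve());

series.then(function () {
  console.log(calls); // ['first', 'second']
});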

It's a simple yet clever trick. Many thanks to my coworker @pateketrueke for figuring this stuff out with me.

]]>
File navigation in Vim (my way) https://greduan.com/blog/2016/04/09/fnivmw Sat, 09 Apr 2016 00:00:00 +0000 2016-04-09-fnivmw I'm going to talk about a flow I've developed recently, for myself, for navigating files within Vim.

Note that I say Vim in the post title but I use Neovim. I just felt like pointing that out, even though it makes no real difference to this post's content.

Let's just start with saying that I use fzf for finding files in a fuzzy matching manner. This is incredibly convenient. You can of course use CtrlP or whatever you prefer, but I use fzf.

So that's one way I navigate files. Another way, which in combination with the above is actually quite powerful, is with netrw-like or netrw-enhancing plugins.

Namely vim-vinegar and vim-dirvish.

Why do I use two conflicting-looking plugins?

I use Vinegar because it provides a mapping of - to open the current folder in netrw, or to go up one folder if already in netrw. But netrw isn't Dirvish. Don't worry though, cause Dirvish hijacks netrw, so when you open netrw, Dirvish opens instead.

Dirvish is what makes this setup cool for me, so if you haven't already, read its README file.

Anyway, I wanted to document that. I'll also share my related rc config stuff so that you and I can reproduce this behaviour easily:

Plug 'junegunn/fzf', { 'dir': '~/.fzf', 'do': './install --no-update-rc' }
Plug 'junegunn/fzf.vim'
Plug 'tpope/vim-vinegar'
Plug 'justinmk/vim-dirvish'

" Relative line numbers in a Dirvish buffer
autocmd! FileType dirvish setlocal relativenumber

Update 2016-06-14

Somebody on Twitter actually let me know that you don't need vim-vinegar to get the - keybind; vim-dirvish added it itself now, which is great.

So you can just have vim-dirvish installed now and that'll work out great. :)

]]>
Two months of OpenBSD https://greduan.com/blog/2015/06/18/tmoo Thu, 18 Jun 2015 00:00:00 +0000 2015-06-18-tmoo September 1st: This blog post was originally written back on June 18th; it was a draft so I didn't publish it, but it's been here for so long I decided to publish it.

If you haven't already, read my previous blog post about OpenBSD where I shared my experience of switching to OpenBSD.

OK, in this blog post I am going to share my experience of using OpenBSD on my main rig for 2 whole months, and why I switched back to Linux (gasp!). I am writing this in Arch Linux which I just installed today, in case you're wondering.

Let's start off by saying, it has probably been my favorite experience from an OS. It is certainly the first OS (after the CRUX distro) that I did not feel dirty installing or using. With Arch I always have this itch in the back of my head which makes me uncomfortable using the OS.

The lack of GNU in my coreutils and ksh being the default shell was a very nice feeling.

It certainly felt weird to not be using a rolling-release OS, and I never installed any patches or -current so I probably missed out on a whole experience.

All I'm saying is, I didn't leave OpenBSD because I didn't like it. I loved it! I am leaving OpenBSD because the Node.js support on it is weak. And I know the guy maintaining the node package is doing his dangdest, but sadly without nvm or something similar one can't have a good Node.js dev environment, and nvm depends on the prebuilt binaries Node.js offers, which do not have BSD versions. :/

For those unaware, when working with Node.js one often has to work with several versions of Node. 0.10 being stable, 0.12 being the stable but sorta new Node.js, and io.js being the absolute newest and least stable. Because you work with different versions depending on your client or your project, you need to switch between these versions. nvm offers a cool feature where you can do nvm use 0.10 and boom, your path now has Node 0.10 in the PATH instead of 0.12 or whatever.

The lack of a tool like nvm is incredibly inconvenient, and that is why I'm switching back to Linux. So it's nothing personal, it's just Node.js is my job and I need nvm.

In the future I will definitely be switching back to OpenBSD if the situation with Node.js improves, get on it devs! :)

]]>
Experience upgrading OpenBSD to 5.7 https://greduan.com/blog/2015/04/30/euot57 Thu, 30 Apr 2015 00:00:00 +0000 2015-04-30-euot57 In short, it was way way way way simpler than I expected it to be. It took less than 10 mins to update OpenBSD from 5.6 to 5.7. And then another half an hour to update all my packages with # pkg_add -u.

The reason I was scared is that I'm unfamiliar with this sort of upgrade procedure; I'm used to rolling distros, where staying up-to-date means running one command every once in a while. I never used Debian or Ubuntu extensively, so I didn't get to experience freeze periods, OS version numbers, etc.

I mean there's not much more to say about that. Just so that it's not a really short post I'll lay out the steps I took:

  • Downloaded the install57.fs
  • dd if=install57.fs of=/dev/sd0c bs=4M (of= may vary for you)
  • Booted into USB
  • Chose (U)pgrade instead of (I)nstall or (A)uto Install
  • Went through procedure
  • Rebooted
  • Read mail (just reports)
  • # sysmerge
  • # pkg_add -u

And that was it. It was way simpler than I expected.

I will be upgrading to -current soon enough so I can write about the experience, also I need the latest sort to have the nvm bug fixed.

EDIT:

Today is May 2nd. I just realised there is an upgrade guide for 5.6 to 5.7. I only read it today, and I ran its steps; I had missed this part of the upgrade earlier. Oops.

]]>
Indentation and hooks in Emacs https://greduan.com/blog/2015/04/29/iahie Wed, 29 Apr 2015 00:00:00 +0000 2015-04-29-iahie I've been using Emacs on and off for around half a year, maybe a little bit more, more recently I've been using it daily because I've been using Org-mode more and I've been editing code with it.

Something that has always bothered me about Emacs is how dang difficult it is to manage indentation configs and stuff like that. I've never been able to have a nice clean way to have a per-filetype based indentation config.

This is coming from the perspective of an adept Vim user, as it isn't actually that hard in Emacs, it's just tedious, while in Vim it's just one line.

For a while I was just using the editorconfig Emacs plugin, but it doesn't work right, since it only checks the config for a file once, when the file is opened. If the .editorconfig file changes you need to close and reopen the file, or close Emacs and open the file again.

BTW if you don't use editorconfig in your projects, this is a great chance to start now. It only makes sense to use it.

After a bit I started playing around with hooks.

Now for me the problem with hooks is that they're a bit verbose. That's not really a problem, it's just me not used to how verbose Lisp can get sometimes.

I am here to help you out with Emacs hooks.

Searching for mode hooks

This is quite simple, if you're not familiar with Emacs' built-in help system, you really should look into it (C-h C-h).

Do C-h v to look for a variable, then start typing the name of the mode, say shell-script, and press tab (? works as well). Probably one of the last results you'll get is shell-script-mode-hook. Try it for other modes you're interested in; you'll probably be able to find them this way.

Now there are some caveats, apparently not all hooks are made equal. For example, javascript-mode-hook doesn't do anything, but js-mode-hook does, even though the major mode is called javascript-mode.

My indentation strategy

Now it took me a bit, but I devised a system that works pretty nicely and is not very verbose for my tastes. It is a bit tedious though if you don't have Lispy or Paredit.

Anyway, the strategy I have is simple. First I define a default indentation setup:

;; default
(setq-default tab-width 4)
(setq-default tab-stop-list (number-sequence 4 100 4))
(setq-default indent-tabs-mode 1)

You can find out what tab-width, tab-stop-list and indent-tabs-mode do with C-h v, and about what number-sequence does with C-h f.

Those are some defaults that I want to have on every file for which I haven't defined something else.

Next I define a utility function:

(defun my-tabs-stuff (tabs length)
  (setq indent-tabs-mode tabs)
  (setq tab-width length)
  (setq tab-stop-list (number-sequence length 100 length)))

What this function does is tell the buffer in which it is run whether we should use real tabs or fake/space tabs (indent-tabs-mode), how wide a real tab should look (tab-width), and what Emacs should treat as a tab when dealing with spaces (tab-stop-list). That's all our function does.

Then we go on to define hook functions, these are the functions we are going to refer to as the function that gets called when the hook is triggered.

(defun my-emacs-lisp-hook ()
  (my-tabs-stuff nil 2))
(defun my-shell-script-hook ()
  (my-tabs-stuff 1 4))
(defun my-js-hook ()
  (my-tabs-stuff 1 2))

So we are just defining one function per mode, which in my case only call one function each, the function being my-tabs-stuff and passing it the arguments for what I want as indentation settings in that mode.

Now all that's left is to add these functions to the hooks:

(add-hook 'emacs-lisp-mode-hook 'my-emacs-lisp-hook)
(add-hook 'shell-script-mode-hook 'my-shell-script-hook)
(add-hook 'js-mode-hook 'my-js-hook)
;(setq js2-mode-hook js-mode-hook)

Note the last one, you can basically alias one mode's hooks to another's by using something like what you see in the last line.

Hopefully that helps you out, I've wasted too much time with Emacs figuring this out, or not figuring it out and suffering the consequences. So hopefully this saves you some time. :)

]]>
My switch to OpenBSD, first impressions https://greduan.com/blog/2015/04/19/mstobfi Sun, 19 Apr 2015 00:00:00 +0000 2015-04-19-mstobfi So I switched to OpenBSD, and this blog post is here to talk about my first impressions. This probably won't be my last blog post on the subject.

So that you can understand how I use my distros, "ricer" is usually a term used to refer to people that change the look of their setup to make it look very attractive, outside of the defaults of whatever environment they have. Take a look at /r/unixporn for many good examples of ricing.

An under-the-bonnet ricer means the ricer only looks to improve the workflow or the commands and stuff they have available to them, not the looks. I am an under-the-bonnet ricer to the core.

Because of my nature I've had to reinstall Arch 3 times because I broke it and have been using CRUX for a while, cause that's a fun distro to play with.

OK, on with BSD.

Why?

Why OpenBSD? Why not FreeBSD or NetBSD or DragonflyBSD or any other BSD? Why BSD in the first place?

I've been a Linux user for several years, and more recently I've been getting into being all POSIX-compliant and stuff and GNU's coreutils have been grinding on my nerves with that stuff.

So even though Linux is awesome, and compiling it is fun, the OS on top of it I don't like, so I wanted to switch to something better, that something was BSD.

Sidenote: Why does the GNU sort command have an -R flag which randomises the result? You can't sort something into being random. That's an oxymoron (with a particular choice of definitions).

Now, why OpenBSD instead of another BSD? First of all because my friends at Nixers.net prefer OpenBSD (those that use a BSD). It's good to switch to a system where you know several people that know it. Makes the switch much more fun.

Secondly, in December I did try to switch to FreeBSD. It was a chance I had to switch, but I had trouble getting X to work and at that point I really needed a working OS. This time I didn't want to deal with the X stuff so I just went ahead and installed OpenBSD which I had heard had excellent X support out of the box, and holy crap it does.

And thirdly because of the security orientation that the whole project has. That is a really attractive feature for me.

First impressions

Short version: I'm lovin it.

Keep reading for the long version.

The install

Getting the USB stick ready was unique. I downloaded the install56.iso but that didn't work when I dd'd it into the USB stick. So then I read the INSTALL.amd64 file and it uses the .fs file for the USB stick, not the .iso file, so I downloaded that and dd'd it and it worked. So that was new.

The install was certainly "weird" for me, coming from more manual Linux distros where I format the harddrives, mount the partitions, write the fstab, etc. all manually. It was pleasant though, somehow I don't feel dirty with a clean install of OpenBSD as I do with a clean install of any Linux. Probably the lack of GNU. lol

But yeah, I was expecting a slightly more graphical install, since I already experienced the FreeBSD install, but I'm fine with text prompts. It's still simple enough.

X and hardware support

The X support was incredible, simply incredible. I enabled xdm to start with but quickly disabled it cause I've my own .xinitrc file. Simply put, if I don't mention it it's because it worked perfectly.

The only thing that isn't supported is my wifi card. A dreaded BCM4315. That would have been a deal breaker some months ago but now I have an extra long ethernet cable so it's fine. This is a laptop though so I need to buy that wifi dongle...

I am having a bit of trouble with the lid though. Closing it suspends, which is fine, but then when I open it the screen stays black. I pressed buttons and stuff and it didn't turn on again, so I'm guessing for some reason my monitor doesn't wake up. Dunno what's up with that.

Sidenote: I've a Dell Vostro 1500 from 2007, with an Intel Core 2 Duo 2.0GHz.

Ports/packages system

The ports/packages system is something I really like in OpenBSD. Kinda sad CVS is still used over Git for the ports, but that ain't gonna stop me from liking it. Seriously though y u no Git?

I like how it's decentralised. A ton of mirrors counts as decentralised for me. lol

I like how the $PKG_PATH variable works. I'd be fine with this setup if it was in some Linux distro.

The pkg_add command works very well as well. It lets me know of the all the stuff it's installing, and when it installs dependencies it lets me know what package requires that dependency. Makes it easy to tell what piece of software is installing a ton of dependencies you don't want. :P

Being productive again

First of all, thank you to BSD Now and their tutorials, especially this one: http://www.bsdnow.tv/tutorials/the-desktop-obsd

As a Node.js dev it can be a little hard to get started if you're used to using nvm because of a little bug which I already reported. By the time you read this it'll already be fixed most probably.

Though Node v0.10 is included in the packages so I can still work, just not with the latest and greatest. I expect that to get updated to v0.11 or v0.12 in OpenBSD v5.7 though.

Other than that, everything has been very smooth so far. Some software I like isn't in the packages but I can compile that myself so there's no issue, and it may even be in the next release so I'm not too worried.

The first day I installed OpenBSD in the evening. The next day I spent some time figuring out how stuff worked, basics here and there etc. Getting up to a working productive state again. The next day I was completely productive again. So the downtime was really just the evening while I was installing it and figuring out basics.

I'd say that from the moment you plug in the USB to when you get back to work again is like less than half an hour, honestly.

Final verdict

If you're considering switching to OpenBSD, totally go for it. There is nothing stopping you but yourself, like seriously.

However, definitely make sure your hardware is supported first. I spent an hour trying to figure out why the wifi didn't work because I assumed it worked like in Linux.

]]>
December 2014 to April 2015 https://greduan.com/blog/2015/04/18/d2ta2 Sat, 18 Apr 2015 00:00:00 +0000 2015-04-18-d2ta2 So first of all, I'm sorry that I haven't written any blog posts in the last few months. I don't need to apologize, but I feel like doing it. If you felt you deserved an apology, I don't see why but I already apologized, so there.

OK so I'm just gonna give you a quick update of what's happened in these months.

Basically, in December I started a move which didn't end until the 26th of December or something like that. During that time I had no internet. Why? Cause I had messed up my Arch Linux install, but I didn't have time to reinstall, so yeah, I was stuck without internet or a laptop for almost a month.

Then the first of January of 2015 or something like that I got internet access. I spent one week debating whether I should install Arch again (cause I was tired of it) or if I should go with something new. I wanted to install *BSD but I had trouble so in the end I went with the CRUX Linux distro.

So I've been using CRUX for 4 months, until yesterday when I installed OpenBSD.

Today I've been having fun discovering OpenBSD. I'll soon have a blog post about my experience with OpenBSD.

]]>
Respect, for respect is acknowledgement, and acknowledgement is a right https://greduan.com/blog/2014/12/08/rfriaaaiar Mon, 08 Dec 2014 00:00:00 +0000 2014-12-08-rfriaaaiar I've been thinking, and I think you should respect everyone. No matter the circumstances, who the person is or what they've done.

It doesn't need to be respect for his actions, circumstances or for what he/she has.

It should be respect for the fact that he/she is a human being.

Acknowledgement is important. When somebody says something they say it expecting an acknowledgement of the fact that their communication was understood.

When somebody does something, they do it expecting somebody they care for or look up to to acknowledge the fact that they did it. Think of the kid who cleans his room without anyone telling him: he wanted an acknowledgement, whether that acknowledgement is candy, dessert, or simply gratitude for the fact that he cleaned his room.

When a teen is being rebellious, he is doing it because previously he did not get an acknowledgement for who he is, and he is expecting, knowingly or unknowingly, to finally get that acknowledgement.

So do try to acknowledge everyone for who they are. You don't have to agree, but you do need to acknowledge, for that is a human right. (In my humble opinion.)

]]>
Figuring out when you installed Arch Linux https://greduan.com/blog/2014/12/07/fowyial Sun, 07 Dec 2014 00:00:00 +0000 2014-12-07-fowyial I just figured out a cool trick to check when you installed your current Arch Linux install.

All you have to do is check the logs for pacman which can be found at /var/log/pacman.log. Go to the top of the file and look at the date.
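
For example, this prints the very first entry, assuming the default log location:

head -n 1 /var/log/pacman.log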

Looks like my current install was installed at 2014-04-16 15:48. Longest running install yet.

Of course this trick depends on the fact that the logs still exist. If you cleared those logs then you can't really use this trick. Although IMO this is one log you shouldn't delete, considering how valuable its data could be.

]]>
Minimal amount of fonts in Arch Linux https://greduan.com/blog/2014/11/21/maofial Fri, 21 Nov 2014 00:00:00 +0000 2014-11-21-maofial Thought I'd document this here.

Some weeks ago I decided I'd find the smallest font packages I could find but that would still cover the biggest amount of Unicode.

After consulting with Reddit I came up with the following list:

  • ttf-dejavu because some programs just need it and it's a pretty global "default" font in open source.
  • ttf-liberation an extremely good looking alternative to some MS fonts. Excellent for default serif, sans-serif and monospace fonts.
  • adobe-source-han-sans-otc-fonts for full CJK support.
  • ttf-noto-nocjk for big coverage of Unicode that isn't CJK.

Just thought I should share that, considering some other people may be looking for the same thing some time in the future. :)

]]>
Get it together Linux users/devs! https://greduan.com/blog/2014/10/05/gitlud Sun, 05 Oct 2014 00:00:00 +0000 2014-10-05-gitlud This is a rant for Linux users. Take it as an (angry) man letting people know his opinions.

First let's start with unity. (GNU/)Linux users are either united or they're not. The attitude in this community is that if you don't like something about the distro you're using just go ahead and make a new one. Yeah thanks for the advice.

You don't like something about the WM you're using? Make a new one.

And I'm not particularly bothered by the "put it up or make it up" attitude, that's completely fine. What bothers me is how separated the WHOLE community is.

There are thousands of distros, all with their own spin on stuff. Most, if not all of them feel some kind of elitism for the distro they use.

Arch Linux users, their distro is incredibly awesome. In my years with it (a couple) I haven't seen a single political debate over whether one approach to something should be used over another (i.e. Systemd vs. Busybox or something). I mean of course users argue, that's always gonna happen because Linux seems to be infested with angry people all over the place. But for a team to be arguing about what direction to go (i.e. Debian) I find extremely ridiculous.

OK so thousands of distros. Hundreds of window managers, floating, tiling, for christ's sake give it a rest!

This situation where it's everybody against everybody is the exact reason why stuff like OS X is more attractive to users. TBH it's more attractive to me; the only reason I don't use it, although I have been tempted countless times, is that I have a strong resolution not to buy into a commercialism mentality, which is exactly what Apple has. New product comes out? GO BUY IT!

And in all their super nice hardware and commercialist attitude, they got one thing right, although not exactly in the best way. What they got right was that they made it very easy for developers: develop for the latest version or your software will go unused. They do it through planned obsolescence, but Arch Linux could very well do it just because all the people that use it will only use the latest of the stuff.

They got another thing right. There is no divide among the developers; there is no debate whether to use Qt or GTK+. Everybody has only one option. And I'm not saying the debate is bad, but the sheer amount of time that we spend on these debates is just ridiculous. I'll settle it for you: GTK+ can use C OR C++, so it's better in that sense. Qt is limited to C++ BUT it's got way better cross-platform support. There, that's what the argument should boil down to.

In the end I'm sure somebody will be like "oh if it's so easy why don't you do it?" or "just do the software you want then!".

That's not my point. My point is STOP BEING SO ANGRY AT EACH OTHER!

Windows devs get one choice, a really shitty one but it's one. OS X devs get one choice, perhaps great or perhaps shitty, I dunno, but it's only one choice. Linux devs get hundreds, if not thousands of distros to choose from. They need to package in who knows how many formats. They may need to develop in I dunno how many frameworks for shit to work right.

Linux is sorta Lisp all over again. Cool great, you convinced me, I wanna use Linux, where should I start? Well you've got all of these distros to choose from. At least with Lisp the argument is easier. OK I wanna do Lisp, which one should I start with? Well depends on your program, and if you don't like one you can just install another. With Linux you gotta re-install the entire OS to change it.

Point is, stop fighting. Figure out a distro you like, use it, be nice to each other, don't give somebody shit cause they're newbs or they don't know how to use the CLI or when they don't use the distro YOU like. Let me remind you, we want more casual users. The way we're going we may never have a lot of casual users like OS X or Windows.

I dunno if I got my point across, but at least I said it so now I feel better. Just be nicer. I don't care what you do or how much time you have to waste for it, just be nicer and more united. Please.

That is all.

P.S.: The post title is a play on words, get it?

]]>
The bleeding terminal background inside Vim + Tmux problem https://greduan.com/blog/2014/09/10/tbtbivtp Wed, 10 Sep 2014 00:00:00 +0000 2014-09-10-tbtbivtp I need to document this as a reminder for myself and as a saviour blog post to any person out there in the internet that uses Vim and Tmux.

Read the following blog post. That is all I'm going to say.

http://sunaku.github.io/vim-256color-bce.html

Solved a problem I've had for years in Vim. Thank you random internet person.

Hopefully this helps you as well, random internet person.

EDIT:

Thought I should probably post the solution as well:

if &term =~ '256color'
	" disable Background Color Erase (BCE) so that color schemes
	" render properly when inside 256-color tmux and GNU screen.
	" see also http://sunaku.github.io/vim-256color-bce.html
	set t_ut=
endif"
]]>
Getting used to software updates https://greduan.com/blog/2014/09/10/gutsu Wed, 10 Sep 2014 00:00:00 +0000 2014-09-10-gutsu I noticed that some years ago I used to seriously look forward to every update that any software I was using got. Like I'd read all the changelogs and I'd be constantly checking to make sure there wasn't an update to some piece of software that didn't update itself.

That's also a reason I got into Arch in the first place, I think, besides the minimalism and how I like to build my environment from the ground up. The reason being that it has a rolling release model, so all of my software is as up-to-date as it can be. Almost; I mean, it doesn't go into betas, just into every "stable" release according to the piece of software.

I think using Linux, specifically Arch Linux, for several years now has desensitised me to software updates. Where now I don't look at every update, instead I look at major updates, updates to the browser, to my text editors, etc. Stuff of which you have to be aware because a lot has changed since you last used it, those kind of changes.

Just thought I would document that I guess, there's not more to this blog post.

]]>
Barebones file navigation in Vim https://greduan.com/blog/2014/08/24/bfniv Sun, 24 Aug 2014 00:00:00 +0000 2014-08-24-bfniv This post is mainly a rip-off of this talk that happened on a Vim London meetup titled "Bare Bones Navigation, by Kris Jenkins": http://vimeo.com/65250028

You can find the slides here: https://github.com/krisajenkins/bare-bones-vim

He ends the talk with a slide that has the following:

" :find
set path=**
set suffixesadd=.java,.py

" :find gets better more
set nocompatible
set wildmode=full
set wildmenu
set wildignore=*.class,*.pyc

" :ls & :<number>b

" :Explore
" :e scp://host/some/where/file.txt

So I'll just quickly explain those. This post is mainly for reference for myself but I still hope it helps you. :)

Also I've challenged myself to use only these built-in commands for a while instead of some fancy FZF or Unite.vim or anything of the sort.

Let's start with :find. This command just finds whatever filename you give it; no auto-completion, just give it a filename and it'll find it. It needs path to be set to ** for it to find any file in the current directory.

The definition of path can get very detailed and complex, so go ahead and go nuts on your definition. It can also be comma-separated, in case you have specific paths you like.

suffixesadd is for :find; it allows you to skip the file extension and still have :find find the right files. Set it up so you can save on some typing. Comma-separated.

nocompatible don't need to explain this one, if you're using Vim just use it.

wildmode sets up the kind of auto-completion that Vim has for the : command line. While he sets it to full, I always set it to list:longest; this is preference though.

wildmenu, from what I understand, makes it so that the auto-completion isn't all on one line, so that it uses extra lines to show the completions.

wildignore is to define files to ignore. Set it to the files you mostly never want to edit because only the language uses them, not the programmer.

Then he talks about switching buffers with :ls and :b[uffer]. Use :ls to list your buffers and :b[uffer] to switch to a buffer by its number. While he puts the buffer number before the b, you can also put it afterwards.
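
A hypothetical session might look like this (buffer numbers and file names invented for the example):

:ls
  1 %a   "src/app.js"    line 12
  2  h   "README.md"     line 1
:2b

That last command switches to buffer 2, README.md.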

Then it's :Ex[plore] (which includes :Sex[plore] and :Vex[plore]) and :e[dit]. The :{E,Ve,Se}x[plore] commands open up the built-in netrw file explorer, which is a built-in plugin, meaning it is not loaded if you use vim -u NONE to skip loading any config or plugins.

VimCasts has a blog post about this that I suggest reading so you get familiar with this plugin and how to use it: http://vimcasts.org/blog/2013/01/oil-and-vinegar-split-windows-and-project-drawer/

And finally :e[dit] is a built-in Vim command and all that there is to it is to give it the full path of the file you want to edit. The path you give it is relative to the current directory.

That is all on that subject. While I'm on it though, I would suggest that if you are in a setup where you can have a couple of plugins, I really suggest you install the following plugins by Tim Pope:

That is all, hope this helps. :)

]]>
Navigating in the dark https://greduan.com/blog/2014/08/21/nitd Thu, 21 Aug 2014 00:00:00 +0000 2014-08-21-nitd Just thought I'd quickly write a couple of tips to see in the dark, besides having good eyes but those are not required.

Right, so I mentioned this to my friend Raam Dev and he answered with the best explanation I could hope for:

Yeah, I can navigate in the dark really well too, but I think vision and time-to-night-vision plays less of a role than simply using all of your senses and being good at making accurate judgment calls about where things are, where they might be, and then using the way the limited light bounces off objects to get a better feel for the world around you and then using all of that information to build a mental image of the world you're navigating. When I wake up in the middle of the night to use the restroom, which requires walking down a short hallway outside the bedroom, I always walk with my eyes closed. I find that walking in the dark with my eyes closed is easier than walking with my eyes open, because then my brain does all the visualization and my eyes, which may see reflections or other objects, don't fool my mind into second-guessing itself.

And:

Of course, while walking with my eyes closed, I do use my hands as "sensors", to feel walls, the moldings on the walls, the doors, the door handles, the corners of walls, etc. All of that builds a mental image in my mind that I use to navigate.

(Yes I asked for his permission, in case you were wondering.)

So basically, seeing in the night is not all about if you can "see" in the darkness, although if you can that's cool. It's mostly about making the most of the minimal input that you have, mostly visual, because with the rest of the senses the input is certainly not minimal.

But yeah, for example if I have the chance, before navigating a dark room I try to turn on a light for a couple of seconds in order to get a very clear mental image of where all the stuff in the room is. After that it's just about moving through that physical space while continually updating your mental image, essentially.

Ever tried closing your eyes and guessing how many steps it would take to be from one end of the room to the next or ever tried navigating the house with your eyes closed? It's sorta like that.

Remember that you are trying to "see" in an environment where you don't have enough light to see, so with the small inputs you have, just draw a mental image of your environment.

Of course it's advisable to go slowly, so that if you miscalculate something it doesn't hurt, although with experience this will start happening less and less.

Just thought I'd share that, hope this helps. :)

]]>
Knowing something but not registering it https://greduan.com/blog/2014/08/09/ksbnri Sat, 09 Aug 2014 00:00:00 +0000 2014-08-09-ksbnri Ever have that moment when you realize something that you already knew?

I know it sounds kinda dumb, but it is not that rare for me to realize something that I had realized before, just never really thought about.

Like I remember one day I watched some Hatsune Miku concert, just one song, and I realized there was a hologram, and I was like "cool, they've got a hologram".

Some months later I watched it again and I was like "holy shit holograms! Technology has gotten to that point!".

Just an example.

Another one would be how I made an entire CLI app just to download a Gist's files and put them in the current directory. Just yesterday I realized I can just clone a Git repo of the Gist and that's that. I was thinking that the .zip file download was the only way to go, but it isn't. I knew about the Git repo stuff, I just never even registered it as an option.

Kinda short post, but just wanted to share that.

]]>
A week with Emacs https://greduan.com/blog/2014/08/08/awwe Fri, 08 Aug 2014 00:00:00 +0000 2014-08-08-awwe I have begun writing this on the 5th day of my one week with Emacs. Since a friend of mine was making the switch to Emacs, or at least seriously attempting it, I decided to take on a challenge with him where we would only use Emacs.

The agreement was that I would only use Emacs, for everything, with or without Evil mode, while he would use Emacs for anything that wasn't coding, as using Emacs for coding would be a serious dent in his productivity and he basically wouldn't be productive for a whole week.

We agreed to keep a journal of our experience with Emacs. I haven't read his so I'm not sure how detailed he is with his, or how dedicated, but I don't think mine will be as detailed as his, in any case.

So I decided to share my experience in my blog and sharing my journal in it, along with a more detailed version of the journal I suppose.

I put the whole journal in a file named emacs-journal.txt in my home directory. I won't paste the whole thing here, since it would just be a screen hog, so I put it in a Gist, which can be found here: https://gist.github.com/greduan/2f555993c1a537d8e7a5

After you read that, come back here for a more detailed version, or just skip the journal altogether if that's what you prefer.

Be warned, if you came here for a review or a workflow blog post you may not find what you are looking for here. I say "may" because I am just now writing this post and I don't know what'll come out of it.

In this post I think I will mainly share differences I have noted between Emacs and Vim, its users, the workflows found in both, how the experience was for me, a 1-2 year only-Vim user, etc.

Let us begin!

Why Emacs?

So a Vim user that can fluently think in motions and text objects, why the hell would he want to be a traitor and switch to Emacs?

Well one, as I said earlier, it was a challenge in order to help my friend switch to Emacs, so that he wouldn't be alone and all of that, because being the only one that uses a certain tool is kinda sad. I should know: I'm the only one on my team that uses Vim; everyone else uses some IDE like PhpStorm (yuck!). I'm also one of only two that use Linux as the OS instead of OS X.

Two, I am naturally interested in Emacs, seeing how I am the user of the archrival editor.

I also recently found this font package, but I can't really use it very effectively in Vim so I thought I'd check out Emacs. For the record I haven't done anything with this yet.

OK let's get to the meat of this blog post.

My observations

About what? Everything.

The users

The most glaring observation for me is the different kinds of users that use Emacs and Vim. Let me explain.

I feel like in Vim the user makes the change fast, while in Emacs the user takes a bit of time to code some kind of solution for Emacs to do it for him. In Emacs that piece of code remains in your init.el file forever if you want, and you can use it whenever; in Vim, if you want to make the change again, you just do the sequence of keystrokes again, or record a macro and save that somewhere.

Note: Do remember that I've only seriously used Emacs for 5 days at this point, so I definitely don't know the workflow of a 10 year Emacs user.

In Emacs you can customize I think pretty much literally anything. I don't mean the figurative literally BTW, I mean the literal literally.

In Vim you can customize to a great extent your text editing experience, but you can only customize your environment experience to the limits imposed by Vim's options and settings. Of course you can probably get very clever and do some very interesting stuff to customize Vim.

[Note: Now the next day, the 6th day.]

So the users have very different mindsets. While in Emacs it is "how can I automate this?", in Vim it's "how many keystrokes can I find a way to skip?". This is brought about by the differences between Emacs and Vim, IMO.

The ergonomics

I'm just going to go ahead and say it. In my opinion, Emacs' default keybindings suck. Being a Vim user I found it super uncomfortable to have to go and find the Ctrl and Alt keys constantly. Maybe it has to do with how I press the keys, maybe not, I press Ctrl using my left pinky and Alt using my left thumb. I don't feel like those keys are very strange but maybe they are.

And no, I did not switch Caps lock and Ctrl, neither will I do it. I think I tried doing it at some point for Tmux, as I heard suggestions to do that, and remap the prefix to Ctrl-a. I did not like it, felt unnatural.

Instead I decided to use something like God-mode, which feels like less of a hack, however I haven't gotten used to using it every time I can so I haven't gotten much benefit from it yet.

So yes. Ergonomics. Freakin' work on them please, at least make the keys more natural. M-b is exceptionally unnatural to press when you use your thumb to press Alt, again that may be my own fault though.

The damn tabs

Why is it SO HARD to configure how tabs work?

I had tabs sorta figured out, just make everything be a hard tab and you'll be fine, but that doesn't work when you're working with Lisp, because Lisp and hard tabs are the bane of good code formatting. But IMO tabs work everywhere else better than spaces.

I'm not going to go through what I've tried, it wasn't a pleasant experience. In Vim it's not pleasant either but at least it's straightforward.

Notice from 2015: I figured it out. :3

Elisp

Elisp is cool. Configuring an entire editor with it is a concept I enjoy thinking about.

While Emacs is a HUGE piece of software, compared to Vim that is, it has a TON of code, Elisp and C code, all to give you a great piece of software that you can configure as much as you want without a second thought.

Light Table is not the next Emacs

This is a reference to a post I did previously, I was very excited about Light Table and ClojureScript and all that jazz back then and I didn't really know all that much about Emacs except what I had heard about it.

Yeah, Light Table is not the next Emacs, not even remotely close, it doesn't even work inside the CLI so that already makes it very different. lol

The package management

I won't write a lot about it, I just want to say it's not an ideal situation.

There is already a really great post about this subject.

Notice from 2015: I am informed that that blog post is now very out of date, and I myself can confirm.

I have spent time with both package.el and El-get, I personally prefer El-get so far, but package.el is still really great.

Afterthought

This blog post probably doesn't have a lot of flow, maybe. It was written over several days and it was basically just a rant, i.e. "say whatever you have on your mind".

I think it's quite noticeable that I quickly ran out of stuff to say. lol

Oh yeah, there was no Evil-mode mentioned huh?

Also, since the 6th day I started using Vim for coding again, because Emacs was too slow.

]]>
A small project is not the same as a big project https://greduan.com/blog/2014/07/20/aspintsaabp Sun, 20 Jul 2014 00:00:00 +0000 2014-07-20-aspintsaabp That's quite an obvious statement isn't it? The one in the title. Well I didn't learn this until recently.

Let me explain myself, and how I'm not dumb.

A small project you can get a skeleton for in like an hour: you spend a bit of time brainstorming features and how you can simplify it, and in an hour you have a skeleton. In 15 more minutes you have a project ready to start working on, with the README, the license, etc. figured out. You just have to start making it now, and that takes like 3 hours or less if you're good, depending on the project of course.

In comparison, a big project you don't plan out quite as much, since that's a bad idea to do at the start; instead you set up some tests (if you work Agile), make some decisions on what to start with, etc.

After 5 hours you're not nearly done with the big project, while with the small project you're at least halfway there, if not done already.

If you're used to small projects, like I am, a bigger or even slightly bigger project may make you feel slow; at least it made me feel slow.

I just kinda want to say that's probably normal and not something to worry about; if you haven't finished your idea in 6 months, maybe start worrying. lol

]]>
A projects page! https://greduan.com/blog/2014/07/14/app Mon, 14 Jul 2014 00:00:00 +0000 2014-07-14-app Notice from 2015: Whilst I am not using this page anymore, it is still available and this blog post is still here for historical purposes. This post is in the "delete whenever" list.

Yay I made another part of my website! :D

Just thought at some point something like this might come in handy so I made this website.

Can be found here: http://projects.greduan.com

It is also found in the list of links in my home page.

It doesn't have all of my projects, as you may notice, but it is a list of those that I personally think would benefit others and that are standalone, rather than part of another application like DocPad or MetalSmith.

Of course this list may change and also the requirements that I have for me to put them on there, but we'll see...

]]>
Several VLC interfaces https://greduan.com/blog/2014/06/19/svi Thu, 19 Jun 2014 00:00:00 +0000 2014-06-19-svi A couple of days ago I found out that VLC has several interfaces besides the usual nice GUI interface.

In Arch Linux installing VLC also installs a couple of CLI interfaces for you.

For example it installs nvlc, which is an ncurses interface, meaning you don't need X11 to listen to that music or podcast or whatever.

It also installs cvlc (presumably "console VLC"), which lets you watch a video without the fancy GUI. You do need X11 running for video, but it doesn't load the GUI and all that, just the video output, and some hotkeys seem to work; when I pressed the spacebar it paused, so maybe the rest work too.

And of course it offers vlc, which loads the normal GUI but lets you pass it a path to a file or folder from the CLI.
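For reference, here's roughly how I invoke each one (the file paths are made up, use your own):

nvlc ~/music/podcast.mp3   # ncurses interface, no X11 needed for audio
cvlc ~/videos/movie.mkv    # plays the video without the GUI around it
vlc ~/videos/movie.mkv     # the normal GUI, just started from the CLI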

]]>
Enable `pass` auto-completion in Zsh https://greduan.com/blog/2014/06/18/epaciz Wed, 18 Jun 2014 00:00:00 +0000 2014-06-18-epaciz Just thought I would share this tip with you, since it would have saved me some time if I'd had it.

Add the following to your .zshrc file or whatever file it is that you want, just make sure it's loaded by Zsh:

autoload -U compinit
compinit

That enables auto-completion and loads it up. I'm not sure why I never had it enabled, but now it is.

You don't really need to do anything special anywhere else, I don't think; that's the only thing I had to fix for it to work on my computer.
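If it still doesn't complete for pass after that, one thing worth checking (an assumption on my part, the path varies by distro) is that pass's completion file is somewhere in your $fpath:

echo $fpath
ls /usr/share/zsh/site-functions/_pass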

Hope that helps. :)

]]>
Acknowledgements in communication https://greduan.com/blog/2014/06/14/aic Sat, 14 Jun 2014 00:00:00 +0000 2014-06-14-aic I am going to talk about acknowledgements, what they are, how they work and why they are important. May be a short or long post. I'll say "ack" whenever I mean "acknowledgement" because that's a long word to type over and over.

OK let's get started. First, what is an ack? An ack is something said or done to inform someone that his statement or action has been noted, understood and received.

These can be "OK", "Got it", "Ah, I see", "I understand" etc. This lets the person know you got his or her communication.

Actions can also be an ack, for example applause, you are acking a person's performance and also communicating that you liked it. A thumbs up. A nod. All of these are acks. They convey different meanings along with the ack as well.

Now a person that gives good acks makes you feel good. You like speaking with a person that gives good acks. You do not like speaking with a person that does not give good acks.

Acks can be messed up in several ways, too. For example, your ack is not good enough, your ack is not heard, or you don't give an ack at all. This leaves the other person hanging; they're left feeling insecure about whether you heard them, and most often they will repeat themselves to make sure they were understood.

You can also give too much ack. I cannot think of an example for that one, but it can happen. It leaves the person overwhelmed.

You can also give the wrong kind of ack. Ever been bothered by someone repeating something over and over? Did you say "OK I got it!" in an irritated or angry manner? That is the wrong kind of ack. Yes, you are letting the person know you heard him, but since you said it in a bad manner the person cannot really be sure you heard him, since you may have just said it to shut the person up. Plus it leaves a bad feeling in the person.

You can also give an ack for when something is finished. You were ordered or asked to do something; you ack that, letting the person know you understood and will do it, and then when you finish you let them know it's done. This is a more complete ack, in a sense, because you are letting the person know that their request was carried out, which means their communication was heard, and they no longer have to keep worrying about whether you did it or not.

There are all kinds of tricks and workarounds and facts about acks. You will learn them all with experience, I am just sharing what I currently know.

Find out if you give good acks. It is an important piece of communication, without which communication is a huge pain in the bum.

]]>
Arch Linux font tip(s) https://greduan.com/blog/2014/06/09/alft Mon, 09 Jun 2014 00:00:00 +0000 2014-06-09-alft Just thought I would share something that has been very useful to me recently, and that is some Arch Linux font stuff.

First let's talk about the Infinality-bundle+fonts. This is SO useful! It's basically some pre-configured font settings and fonts that make your Arch Linux font rendering so nice. If you want a real quick plug-and-play font config you can use this.

It was nice and all, but I didn't like how many downloads it added to each pacman -Syu; granted, I don't think there were too many, but my internet is not so fast that I don't care about the size of my downloads.

To uninstall it I had to, IIRC, remove the 'infinality-bundle' repository from my pacman.conf file, then manually uninstall all the fonts and stuff from the bundle. There's probably an easier way to do this but I am not aware of it.
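If you want a rough sketch of the manual route (double-check what the grep matches before removing anything, this is just the idea):

pacman -Qq | grep -i infinality                 # list what the bundle installed
pacman -Rns $(pacman -Qq | grep -i infinality)  # remove it all, as root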

OK so that's one solution. There is another one, which I really like since it works quite well, and which can be found in this blog post: From the mind of a nerd: Font Configuration in Arch Linux

I followed his steps and it works quite well. Some websites for some reason don't have font smoothing, but they are not many. I did not follow the settings he has for XFCE, because I don't have XFCE installed, but if I did, it would probably work flawlessly.

This solution allows me to not install anything extra and still have nice fonts so I like it.

Hope these tips help you out someday. :)

]]>
My slow switch to Emacs https://greduan.com/blog/2014/06/08/msste Sun, 08 Jun 2014 00:00:00 +0000 2014-06-08-msste I'm going to come out and say that I like Emacs. As a platform, not as an editor mind you.

I am using Evil as is probably to be expected. I have become so accustomed to Vim that without some sort of Vim emulation I cannot survive in another editor.

This is why I'm using Evil and I am slowly, but surely, using Emacs more and more, making little modifications that make it nicer and more comfortable for me. I'll soon probably have to write a couple of plugins in order to achieve some Vim behaviour I like though, that'll be fun.

So I just wanted to share that. If I ever need to use Vim in a remote server or for pair programming, I'm open and if I need to use Emacs I'm also open, unless it's Emacs without Evil, in which case I am closed.

I have NO clue how Emacs users could use C-f for moving right by one, C-b for back one, C-n to move down one and C-p to go up one. It's crazy, their hands move ALL OVER the keyboard, and they say it's fine. Whoever says that is either bored or crazy or both. Or something.

]]>
New blog! https://greduan.com/blog/2014/06/07/nb Sat, 07 Jun 2014 00:00:00 +0000 2014-06-07-nb My blog got a new URL! No longer hosted on GitHub Pages, it is now being hosted on my DigitalOcean server.

The way the website works now, http://greduan.com is my index page, where I put links to some profiles and to my blog, which is available under http://blog.greduan.com.

That is all. Hope you like the changes! This will certainly make it easier for me to do stuff with my website.

]]>
My take on Vim vs. Emacs https://greduan.com/blog/2014/05/31/mtovve Sat, 31 May 2014 00:00:00 +0000 2014-05-31-mtovve This is not meant to create any kind of flamewar or anything, this is just my take on the real differences, technical or otherwise.

Let's start with the fact that they are completely different things if you look at them, for very different audiences. Emacs is the platform, Vim is the editor. Vim could never beat Emacs as a platform, Emacs couldn't beat Vim as an editor. And those are facts.

Vim has the most impressive things in it in order to edit text in the most impressive ways possible. You see some Vim users and they just move ALL around the place, sometimes it's even hard to see their cursor from their speed. Sometimes it's hard to figure out what they are doing because they are doing it so fast.

Emacs has the most impressive things in it in order to be able to basically do whatever the hell you want in it. Users can really have everything running on Emacs if they so desired. Here's a short list of some of the stuff Emacs can do by default:

  • Tetris
  • A browser
  • IRC chat
  • Calendar
  • Built-in shell
  • Async OMG!

Vim has trouble with async, but its community has found several ways to achieve async in Vim, vimproc.vim for example. I think there are several ways anyway...

Emacs handles async like a champ since the moment you install it.

Vim starts up in like 2 tenths of a second, with several plugins installed, for me. Practically instantly with vim -u NONE.

Emacs, with emacs -q, is quite quick, maybe like my usual Vim config. But with my init file it takes one or two seconds. Granted, my config really has to be cleaned up.
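If you want to measure this yourself instead of taking my word for it, something like this gives rough numbers:

time vim -u NONE +q                        # bare Vim
time vim +q                                # Vim with your plugins
time emacs -q -nw --eval '(kill-emacs)'    # bare Emacs, in the terminal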

Vim uses VimL, which is slow, quirky and not all that powerful, although some people have done some really cool stuff with it.

Emacs uses Elisp, its own dialect of Lisp. It is a Lisp, what more is there to say? It's awesome automatically.

Now here is an interesting fact about Emacs. It technically has all the power and potential needed to make it as good as Vim at editing text, to an extent, except its interface sucks and isn't modal. There's a thing called Evil that tries to fix that. It does quite a good job, but for the advanced Vim users, like Drew Neil and Tim Pope, I don't really think it cuts it.

Vim users are, as bling once said, tenacious. You will most often find something be implemented in Vim first before it's implemented in Emacs, no matter how hacky or dirty the method is, they will most certainly do it first. Except stuff that is just plain impossible in Vim. Like this: https://github.com/zk-phi/sublimity

That is how I see it. I'd really love to see Emacs as a platform that runs Vim on top of it and that would practically be the best text editor.

]]>
You need to understand JavaScript callbacks https://greduan.com/blog/2014/05/29/yntujc Thu, 29 May 2014 00:00:00 +0000 2014-05-29-yntujc Notice: This post is irrelevant nowadays; had I learnt JS data and objects correctly from the beginning, this wouldn't have been a real problem. Promises are WAY better anyway.

Here I will share something I realized I learnt only after learning it. That is the fact that if you want to program fluently in JS you need to understand and know by heart JavaScript's callbacks.

I do want to point out that I did know callbacks were important, but I didn't realize just how powerless you are when programming in JS and you don't know your callbacks.

So that's the lesson I've learned. Now I'm going to teach to you what callbacks are.

Let's start with the definition of callback, in terms of JS. A "callback" is, in the simplest English, what to do after the function has finished executing. That is the callback.

So let's look at the code:

function x(a, callback) {
	var b = a + 5;
	return callback(b); // pass the computed value to the callback
}

What that code does is add a and 5 and give the result to the callback. So this function would be used the following way:

x(5, function(bap){
	console.log(bap); // logs 10
});

Do you see the connection that is happening? Before I go on I just want to make a couple of points. The callback does not have to be an anonymous function; after reading a lot of code, that's the style you'll see most often, but any function works, as the example further below shows. Secondly, the name of the argument that you use in the anonymous function can be anything, and the variable that you give to the callback can also be named anything.

OK so to explain what is happening here: it's basically running function x(), which assigns the value of 5 + 5 to a var b, and that var b is passed to the callback. The callback receives the value of b, names it bap (in this example) and then logs the value of bap to the console.

They're as simple as that, but I never saw an explanation like this. I did understand that callbacks were what was done after the function was executed and the results were available, but I never understood the process it took to give the anonymous function the value, and which values the anonymous function used.
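And just to prove the anonymous part isn't required, here's the same call with a named function (logIt is a made-up name):

function logIt(bap) {
	console.log(bap); // logs 10, exactly like before
}

x(5, logIt); // a named function works just as well as an anonymous one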

I hope this post helps you understand callbacks, even if just a little. Please correct me if I said anything bad or incorrect.

]]>
Neat trick for Vim keybindings https://greduan.com/blog/2014/05/23/ntfvk Fri, 23 May 2014 00:00:00 +0000 2014-05-23-ntfvk I actually found this out while looking at another person's vimrc while looking for some good Unite config, cause I'm not really comfortable with advanced VimL, which Unite for sure uses.

The specific lines are these: https://github.com/bling/dotvim/blob/0c9b4e7183/vimrc#L565-L580

This gave me an awesome hint, which is that you can map a key so that by itself it does nothing, but with an extra key it does something. In this case it's the space key. And also that you can alias a key to another value.

So now I use it like this in my vimrc (changes I haven't pushed yet though, at the time of writing):

nm <space> [space]
nn [space] <NOP>

And a lot of my filesystem and buffer plugins use these keybinds. I may even set it up eventually so that it completely replaces my leader key, which I honestly don't use much anyway, nowadays.
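For example (made-up mappings, not from my actual vimrc), once the alias exists you can hang whatever you want off it:

" space+b now lists buffers, space+e opens netrw
nnoremap [space]b :ls<CR>
nnoremap [space]e :Explore<CR>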

EDIT:

Posted a question on StackOverflow and it seems it's nothing too special. Still a neat trick though: http://stackoverflow.com/q/23839528/1622940

]]>
Just switch to UTC https://greduan.com/blog/2014/05/21/jstu Wed, 21 May 2014 00:00:00 +0000 2014-05-21-jstu I'll just say it right now, I hate timezones. And daylight saving time, it can go to hell.

I like the fact that anywhere I am in the world (not near the poles) I know that at 8:00 the sun is up and at 20:00 the sun is down. That's all fine and dandy. But I don't like, and I don't think I'll ever like, calculating timezones. Especially with daylight saving time, where I gotta take into account whether the other side is in or out of daylight saving time, plus my own, and they start and end at different times. It sucks.

I have meetings every Monday at 21:00 (9:00 PM) UTC. At this time we all spend some time chatting. For me that means 3 different hours at which the meeting can happen, because of timezones and daylight saving time. It can happen from 15:00 to 17:00. And it was a PITA to ask when it's starting, until I started using UTC as the reference for the time at which it's going to happen.

Let's look at what time is for a moment. We could keep track of it or not care at all; in fact, if we didn't care at all maybe some stuff would be better, maybe. We'd use the sun and the moon to track the time, and instead of saying "Meet you at 12:00 PM in the hall" we'd say "Meet you in the hall when the sun is at its hottest". That's something we can easily feel, and although we'd be off by around one hour, it wouldn't really matter, because our civilization would be VERY different, where preciseness wouldn't matter that much, and when it does maybe we just stick together or set more secure times, like when the sun starts setting or something.

On the other hand, if we didn't keep time we wouldn't be able to coordinate globally, only locally, where everyone knows naturally in how much time the sun will go down.

So time, IMO, has its most use when used globally. But the way we use it today complicates stuff way too much! Even without timezones it'd still be a bit of a hassle, but with our current timezone system, before I plan anything I need to work out which hours of my timezone are daytime in the other timezones and find somewhere they overlap.

If we used UTC it would be very simple, one side says "I'm available from 12:00 to 20:00" and the other side knows exactly what time that is and it can say "OK let's talk at 16:00".

Another thing with timezones: we work on different days if on opposite sides of the planet. Right now it's the 21st for me, but for Sydney, Australia it's the 22nd. Look at this commit diff. These commits happened within practically the same 10 minutes, but because of timezones Git shows one of them as being one day later.

Let's just all agree at the same time to start using UTC, drop the notion that before PM it's morning and after PM it's afternoon, drop the notion that at 8 AM the sun rises and at 8 PM the sun sets, and just learn the UTC times for these events in our own area.

That is all I have to say and this opinion won't change. Different languages, OK, those are different cultures, but different times? Come on. We're all humans on one damn planet, I'm sure we can come to agree on the time.

]]>
My experience with the BSPWM and Sxhkd https://greduan.com/blog/2014/05/14/mewtbasc Wed, 14 May 2014 00:00:00 +0000 2014-05-14-mewtbasc Notice from 2015: While I don't use bspwm as my main WM, I do sometimes use it. But I use sxhkd daily as I tend to use minimal WMs that need it.

In this post I'm going to, as you might have guessed, talk about the bspwm window manager along with its partner tool that's almost impossible to live without, sxhkd a simple X hotkey daemon.

Some people may have trouble understanding the concept of a tree structure for the windows in a window manager, but basically: every window is inside a container, and that container can act as a window or as a container of two other containers, each of which can act like a window or as a container of two other containers, each of which... etc.

This allows for very complex and advanced, yet simple to achieve, organization of your windows.

Right now, TBH, I can only work with these kinds of WMs because they're the only ones that make sense for my workflow, in which almost no virtual desktop is used for the same thing every time. The idea of a master window doesn't really work for me.

However, because of this tree-like structure for organizing the windows, it only has two layouts: monocle, which basically only shows one window on the screen no matter how many there are on the desktop, and tiled, which allows you to take advantage of all of this craziness that is a tree structure.

i3 has a similar structure; i3 describes its structure as a tree, while bspwm describes its own as a binary space, actually its description is "A tiling window manager based on binary space partitioning".

Anyway, my experience with it: super nice for everything except setting up Dzen2, my preferred program to use as a bar with my window managers. I had to spend quite a bit of time on that.

If you have experience with the shell you'll probably notice right away why sxhkd is more powerful than something like xmodmap.
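For a taste of it, an sxhkdrc is just key chords followed by indented shell commands (these bindings are made up as an example):

# spawn a terminal
super + Return
	urxvt

# brace expansion: one rule generates two bindings
super + {F11,F12}
	amixer set Master {5%-,5%+}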

Here's a nice GIF by windelicato about why BSPWM: https://raw.githubusercontent.com/windelicato/dotfiles/master/why_bspwm.gif

Final verdict, it's nice but still in the trial stage, may go back to i3...

]]>
Write down the day ahead of you https://greduan.com/blog/2014/05/08/wdtdaoy Thu, 08 May 2014 00:00:00 +0000 2014-05-08-wdtdaoy I'm just sharing a neat little trick I've setup for myself to keep myself focused on what I should do during the day, or at least help keep myself focused.

I have one of these notepads, that don't really count as notepads. You can rip off each page and the pages are held together by a red plastic which was once sticky but is now dry. If you don't get what I'm talking about just think of it as any other small notepad.

What I'm doing is, I wake up, I turn computer on, and before doing anything else I write down in this notepad, in one page usually, what I gotta do for the day. So I write at the top in big letters the title of the subject, like if a client is named "Bob" I write in big letters "Bob" at the top.

Then I make a list of the stuff that I have to do. I separate each item with a dash (-) and I leave like 1 or 2 centimeters of space to the left of the dash (for Americans and UK people that's around 1 inch). This space is in order to be able to communicate stuff to myself with symbols. Little bit on that later.

So here's a list of stuff that I wrote for myself for my client's site:

Bob:

- performance meta 'title'
- fix videos centering
/ homepage SEO

Add that extra space to the left of the dashes yourself.

So what this means for me is that these are reminders; two of them are todo and one of them is done. - is todo, / is done.

That's it. That's all I do and it keeps me on track. And if I think of stuff to add, I just do, ordering it from least time-consuming to most time-consuming. At the end of the day I just rip this page off, or scroll to the next one; if there's something left over I write it in the page for tomorrow. This of course can be done before or after sleeping. I tend to do it before, but after works just as well, I find.

Now, one thing with this is that you do NOT need to detail exactly what you want to do. Just little reminders that tell you what you need to do. Preferably make it one-liners.

So what's with the space to the left? That's just for symbols. I'll explain.

If I'm working on an item, or I got some part of it done but I need to stop for whatever reason, I add a ".". This tells me I started on it but didn't finish. If I later start again I add a "|".

You get the idea, you can use any symbol you like, I just like using symbols that make sense for me, use symbols that make sense for you.

]]>
What the gut feeling is for me https://greduan.com/blog/2014/04/30/wtgfifm Wed, 30 Apr 2014 00:00:00 +0000 2014-04-30-wtgfifm Notice from 2015: Why even try to describe this without body language? Such a dumb idea from me.

I acknowledge that the gut feeling works differently for everyone, and it seems women have a different kind of gut feeling called a "woman's instinct" or something like that. But I'm gonna talk about my experience with my gut feeling and how maybe you could start using it.

Now, in my experience, the gut feeling has not been what I feel in my gut, so let's start by putting that out there.

Now, the gut feeling for me has been an interesting experience. The way my gut feeling comes for me is the first thought that comes to mind pretty much.

I am better at judging people I just met by gut feeling than by analysis. The first time I meet someone I trust my gut feeling as what I work off from to analyze. No matter how nice or how insensitive a person seems, it's the gut feeling that tells me if they could be good friends, bad friends, or the kind that seems like a good friend but screws you over behind your back.

For example, if I meet someone and they're very charming and all, but something at the back of my head pokes at me like "I don't like this person", I will trust that poke, because most of the time it has been right, and if it was wrong, there's always a second chance, the analysis stage. BTW, one of the reasons I don't hang out with as many people as I probably should: gut feeling telling me no.

But that's just meeting people. Eating food works similarly, except this one could save my life quite directly. I have never eaten something that gives me a bad feeling. I don't judge meat by how it looks, I judge it by my body saying yes or no.

I should mention at this point, gut feeling is not always everything. You know, you're not gonna cheat on your wife or girlfriend because your body is saying yes, it's times like these that the mind has to come in and put the body under control and say "NO".

Have you ever had that feeling where you just feel like you have to do something? Or you've done it before you realize? I haven't had quite an experience like that myself, but I was witness to one.

While in Argentina there was this pizza delivery man; he was leaving for a delivery and had the pizza on the back, whatever. He stored the pizza and got on the bike (a motorcycle), and as he was starting it, with the most fluid of movements, he got off. Like 1 second later a car crashed against his bike (lightly). Now what's interesting is that the car came from nowhere; he could not see it or hear it, and I didn't myself, as I was facing the same general direction he was.

When the car crashed the guy was bewildered, he looked bewildered at least, he had no idea what had just happened. He just got off his bike without knowing why and a second later a car crashed against his bike.

The way I see it, that's gut feeling and your body answering without giving you a choice. My mom experienced something similar: had she not scooted over by like 10 or 20 cm, a car would have hit her. This was one time she was coming out of a cab. I didn't see this one myself; it's a story she shared with me.

This is a long post, but in the end I guess the point is, don't ignore your gut feeling. And everybody has it, those that have it more just know how or when to listen to it.

Next time you meet a person, trust your gut feeling; it'll probably be something so basic you'll almost miss it. For me most of my gut feelings are the first thought in my head when I am given a choice of some sort.

]]>
A technique to remember small stuff https://greduan.com/blog/2014/04/30/attrss Wed, 30 Apr 2014 00:00:00 +0000 2014-04-30-attrss This post is to share something that I've found quite useful, and seems to work quite consistently.

Have you ever tried to remember a name? And I mean one that you know, but forgot when you need it, not one that you heard 10 years ago and there's probably no chance you'll remember.

I'll give you an example so that you get the idea of how it works.

The other day I was recommending a certain artist to someone. I like the artist and all, but I couldn't remember the name; I tried for several minutes.

What I did is ask myself "what's his name?" and I consider the first things that come to mind, for this example let's say the letter "T" came to mind. So I say, OK, that's his name's first letter.

Then I say, starting with "T", what's his name... "Toshi" comes to mind but that's not right. I did notice it was closer though, because it resonated with my memory (best way I can come up with to describe what I felt).

I decided something was missing, so I repeated to myself "Toshi" over and over. Then suddenly I realized his name was "Toshio" and I was like "I got it!", but I still felt like something was missing so I kept repeating "Toshio" to myself. After 15 seconds of that I figured it out, the name I was looking for was "Tokashio", and I was content with that so that was it.

But the point is, I went several times with the first answer that came up. This is usually good for remembering all kinds of stuff. Ever looked for what you were talking about before you got into a heated argument? This works quite well: "well, we were talking about ____". After that most people would say "nah, that was much earlier". Maybe it wasn't; work off from that and you'll soon remember what it was. Unless the argument was completely unrelated to whatever started it.

Anyway, I hope this helps. This doesn't describe it very well, but in a short amount of words it's basically asking yourself "what is it?" and the first thing that comes to mind will probably be it, and continue with it until you remember the whole thing.

With a name you could go letter by letter, "did it start with an E?" if your gut feeling says yes, then it is, then go "the second letter was a T I think..." and continue like that until boom, you've got the whole name.

EDIT:

I was talking with a friend about memory but I didn't mention this technique and at some point he said "like in layers" and I thought that was the best way to describe this technique. Do it in layers and don't deny what comes up, see where it takes you. :)

]]>
Out of sight is not out of mind https://greduan.com/blog/2014/04/27/oosinoom Sun, 27 Apr 2014 00:00:00 +0000 2014-04-27-oosinoom So right from the get-go you know I disagree with this mind set of "out of sight, out of mind". My titles are clever like that.

Anyway, first I'll start by saying I do see the point. I understand that if you're not seeing it then it may not even be a consideration. This is true for stuff like... if you hire someone to take the trash out for you, suddenly you don't care about the trash that you generate, because somebody else is doing it for you.

But if that isn't the case. Then no matter what you do it's gonna be on your mind. And here's where this idea breaks down.

It's great for services, like I clean the house. Eventually you're gonna forget that it was even dirty, if I come daily.

Now what happens if... if you like your dog and it disappears. Even if it is out of sight, it is not out of mind. I mean if you're a worrywart you're probably sweating bullets just thinking about what could have happened to him.

I could give tons of examples, crappy ones probably, but the point is: just because it's not in your sight, it is not gonna disappear from your mind. That's how the mind works. You can't escape the fact that you murdered someone just by hiding the body in a dumpster or something, you know?

Anyway, my point is, no matter what you do, it's gonna be on your mind if you consider it not solved or your mind considers it something that still requires attention. You can put all of the trash of your home beneath your bed, and just because you know it's there, it's gonna bother you, even if it's out of sight.

]]>
Installing Arch Linux on a Dell Vostro 1500 https://greduan.com/blog/2014/04/17/ialoadv1 Thu, 17 Apr 2014 00:00:00 +0000 2014-04-17-ialoadv1 Notice from 2015: Although I don't use Arch anymore, last time I had to install it it didn't have these problems anymore, except the wifi card problem.

This is for future reference and also for other users that may run into trouble with this, here's how you can fix these issues. It's likely you don't even own a Vostro 1500 though, this thing is like 7 or 8 years old (2007).

Anyway, here are the issues we're gonna cover and how to fix them:

  • The wifi card firmware issues
  • Laptop's backlight does not work (at all)

The wifi card

All right. So this seems to be a sort of popular problem on Linux, I even have this problem on Windows XP.

OK so let's do the basics first. Make sure this is the problem you're having. There are plenty of guides out there on the internet on how to fix this, but I'm gonna share a solution that works offline, which is how I've had to install a distro a lot of the time, because Ethernet is scarce for me. If only I had a long Ethernet cable... I have a 2 meter one, which sadly isn't enough, so I gotta sit on the floor while installing. Which, you know, is OK, just a little bit uncomfortable.

Of course the way to do this is to get the necessary stuff while you have the precious internet. Find the necessary firmware somewhere on the internet OR download the AUR package for this, b43-firmware. I've done it with only the first; I thought I would need the second so I downloaded it, but in the end I didn't need it, thankfully.

So this is just the solution I have. Of course, once you've installed Arch you would mount the USB with the AUR package/firmware and install it. The copy of the firmware I have came with instructions, but I can't seem to find where I got it from. For the AUR route you'll just have to know how to install AUR packages.

I think I also had to blacklist the bcma module. Maybe, the blacklist line is there but I'm not entirely sure if it's necessary.
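For reference, blacklisting a module is just a one-line file under /etc/modprobe.d/ (the file name is up to you):

# /etc/modprobe.d/blacklist-b43.conf
blacklist bcma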

The backlight issues

All right. So this one completely baffled me when I came across it, because I couldn't figure out what the problem was, and there was some intense Googling happening. Not to mention I could barely see anything on the screen because it was really dark.

So the problem seems to be because of wrong priorities when loading the stuff that manages backlighting. As explained in this bug report for Debian(?).

What I tried there worked, so here's how I fixed the issue; the process or solution may not be exactly the same for you.

Add acpi_backlight=vendor to the kernel line, which is managed by whatever bootloader you have; I use Syslinux so that was quite painless. And blacklist the dell-laptop module, the same way as bcma above.
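In Syslinux terms that's just one extra word on the APPEND line, roughly like this (your root device and labels will differ):

# /boot/syslinux/syslinux.cfg
LABEL arch
    LINUX ../vmlinuz-linux
    APPEND root=/dev/sda2 rw acpi_backlight=vendor
    INITRD ../initramfs-linux.img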

The whole reason that works and stuff is explained in that bug report so read it if you're interested, or not if you're not interested.

All right, I hope that helps. This will certainly save me hours in the future, I hope.

]]>
Text editor categories https://greduan.com/blog/2014/04/12/tec Sat, 12 Apr 2014 00:00:00 +0000 2014-04-12-tec So something I have noticed recently is that text editors seem to have their own categories now. And I think this will add fuel to the godly text editor flame wars. Maybe.

I am not going to speak about IDEs, since they hold no interest for me and they cater to a more specific type of user. Some of the categories below can be IDE-ized.

And just so you understand what I'm talking about, here's a quick list of the text editor categories:

Powerful text editing:

(Yup, only two, those are the ones I know.)

Powerful customizability:

  • Emacs
  • Light Table

Easy to use, but powerful:

  • Sublime Text
  • Atom

Simple but useable:

  • Gedit
  • Notepad
  • nano
  • etc.

So let's go over each of these and see the differences for me and things that seem to stand out between these.

Powerful text editing

Or the Vim category. Here are the Vim types, clones etc. Their job is to get the job done as quickly and as easily as possible. Want to copy the line yy, BAM. Open a new line above or below the current line O or o, BAM.

So their job is to get the job done. They emphasize powerful text editing. Getting stuff done quickly. They're more of an editor than an IDE environment. They only edit text, as best as possible, and that's all. They are usually keyboard-driven, as that's most effective.

Powerful customizability

Here is the monster that can do anything you throw at it. The Emacs category. It could even emulate the rest of the categories if it so wished. Its job is to cater to the most intricate users as best as possible.

Want it to only save if the file is over a certain file size, feasible. Want it to save the file, make a backup of it, compile it and make a backup of that, again feasible.

And here's what's up with them: they have special structures, or the choice of language to configure them is super powerful. They seem to prefer Lisp as a language, is what I've noticed. Emacs has TONS of stuff to achieve this goal. Light Table has its own structure that can do basically anything, and it's built on top of a browser (basically), so you program against the powerful DOM.

Of course Light Table has its own power, which will revolutionize programming IMO: live and alive feedback.

This is my personal favorite, along with an emulation of the Vim category.

Easy to use, but powerful

This is a category I was fond of for a while. The ST (Sublime Text) category. These are the powerful, often beautiful text editors that seem to cater to web designers, new developers, more normal users that need some kind of oompf in their text editing.

They prefer to be elegant, good looking, user friendly, and to emulate the other categories to some level, especially the Vim category.

They can easily be expanded upon to add new stuff to them, often getting them to near IDE level, but still maintaining a clean look and powerful functionality.

The users in the above categories may not be very fond of this category, as it may be a bit weak for their liking. Atom, however, may be liked by the Emacs-category users, it seems. I haven't tried it myself, but it seems like it'll be more of an Emacs-category text editor...

Simple but useable

And finally we have the barebones text editors. The Notepad category. They are often not too powerful, but they can get the job done, often well enough.

They don't have that much to offer but you know, they work and they save you in a pinch sometimes.

I don't think these will be very involved in text editor wars, as they often don't care, most probably.

]]>
My experience with SolydXK (X) https://greduan.com/blog/2014/04/12/mewsx Sat, 12 Apr 2014 00:00:00 +0000 2014-04-12-mewsx Here I'll share my experience with SolydXK, a Debian-based semi-rolling release distro. I'll try to keep it short while still providing enough info for you to learn something from it. I think I've spent almost 2 weeks with this distro.

This is my first taste of Debian in general, so if I point out SolydXK has something, although it's obvious because it's a Debian distro, please excuse me.

So let's get started! First the installation...

The installation

I put the SolydXK Multi DVD ISO into an 8GB USB using this guide (which is my favorite guide BTW, as it's just a reference).

The installation was the most pleasant installation I've had of any Linux distro, even compared to something like Ubuntu. Besides the fact that it was clear about each choice's purpose, looked nice and other things, most important of all for me is that it supported the dreaded b43 wireless card drivers, which is a huge plus for me cause I don't have Ethernet readily available. I had Ethernet while doing this though, and the fact that it recognized it needed to install them was nice.

If I don't have Ethernet though I have on my USB some firmware files readily available to copy them where they need to go. :P

Kinda sad that it doesn't have a super minimalistic version, which the Arch Linux user in me cries about, but I'm willing to live with it.

Here is a HUGE con with this one though, which maybe will get fixed in the future: it doesn't figure out that there are other distros installed on the same HDD, and it deletes the GRUB entries for those (although the data is untouched).

The general experience

I would give this a very good rating. Nothing posed any problems really, yet.

To switch to the i3 window manager, all I had to do was install it with # apt-get install i3 and choose it from the list of setups (at the screen where you're asked for your password, one of the top right icons). Very enjoyable.

It comes pre-installed with some stuff: Firefox, Thunderbird, Flash, LibreOffice and some other things. Which is fine with me since I use all of those; it does have some other stuff I don't think I'll ever use though, probably.

The update manager is very nice. It just updated today (as I'm writing this) and it's nicer now, since the update manager got an update. :)

Final thoughts

Very nice. If you want a rolling-release Debian, this is what I'd go with if I couldn't use Arch for whatever reason; granted, I haven't tried other Debian distros.

BTW, I did not explain core mechanics that you've probably already read about, this is just to share my experience, not really a review or anything.

]]>
Disconnecting from the DocPad community https://greduan.com/blog/2014/04/06/dftdc Sun, 06 Apr 2014 00:00:00 +0000 2014-04-06-dftdc So this is something like an official announcement. The original announcement can be found on this GitHub issue: https://github.com/bevry/docpad/issues/821#issuecomment-38612150

That's it. I'm super lazy so I'm just linking to the original. lol

EDIT:

This no longer applies. Ben has found a way to reignite my interest in it. And since I now spend much more time in Node.js, DocPad is a big focus point for me.

]]>
Read stuff you have read before https://greduan.com/blog/2014/04/05/rsyhrb Sat, 05 Apr 2014 00:00:00 +0000 2014-04-05-rsyhrb I dunno if you have experienced this in the past, but I know I have. It's a very curious thing indeed: have you ever read something about technology, or whatever field you are in, a month after you did so originally, and suddenly a lot of stuff makes a ton more sense?

So my very clear reasoning for this is: you study something, OK, got it; later, when you study it again, it's much clearer and more obvious, and you can think in it much better, because now you have experience and more knowledge to link it to. So before it was a stray piece of info; now it's clear how it interacts with everything. Or something along those lines.

So anyway, read stuff you've read before, as long as you've learned other stuff of the same subject or you have been working with the same subject, stuff will make more sense and will be clearer in your head.

]]>
Real science, not bullshit https://greduan.com/blog/2014/02/22/rsnb Sat, 22 Feb 2014 00:00:00 +0000 2014-02-22-rsnb This one should be short. But the point is, WTF. It has always pissed me off in anime, manga etc., even IRL, that the characters can't believe what's actually happening right in front of them.

For example: if a glowing light appeared in front of a scientist, they would say it's not real and it can't happen, it can't be happening. It cannot be explained, thus it has not happened and isn't happening.

This defies science itself. Science is not about "can it be explained?", science is about "if we do this 100 times, does it always happen the same way?". That's science and not bullshit. If it's happening in front of you, whether or not you know why, whether you can explain it or not, whether you have any idea what's going on or not, it's happening.

If something works every time, and it can be constantly proven a certain way, whether it's right or not, whether it breaks some "laws of physics" or not, it's happening.

I bet if something breaks the speed of light they'll say that's impossible because the speed of light is unattainable except by light. Even if it's happening in front of them.

BTW, the laws of physics thing: it pisses me off how they are called "laws", when they are only a set of facts that have been proven to be true most of the time, not going to say always. If it suddenly turns out gravity repels instead of attracting, that's "not possible" because laws of physics.

According to the Oxford English Dictionary, a law "is a theoretical principle deduced from particular facts, applicable to a defined group or class of phenomena, and expressible by the statement that a particular phenomenon always occurs if certain conditions be present."

So it can be changed, but it seems they are set in stone and will never change for some reason.

I dunno. Just pisses me off, that is all.

]]>
My explanation of The Zone https://greduan.com/blog/2014/02/19/meotz Wed, 19 Feb 2014 00:00:00 +0000 2014-02-19-meotz Notice from 2015: In retrospect, this seems kind of obvious, doesn't it? Feel free to skip this blog post cause it doesn't help you in life. lol

Also in this time of my life I had a tick of saying "so", so get ready for that. lol

So. A lot of programmers talk about "the zone", a high performance mode that programmers get into, and one that the most pro of athletes also seem to experience. It is described as a "zone" where the people that experience it are just so concentrated that nothing distracts them and they can get great work done very quickly.

Most scientists can't explain this and the best explanation I have seen of this (IIRC) is that it's a result of "time slowing down", where your senses just get so sharp you can feel time slowing down, or so it is described.

You may notice I am mixing two kinds of "zones" here, the programmer's zone and the athlete's zone, this is on purpose, you'll see later on why.

Hammock Driven Development

So, I got thinking about this originally because of Rich Hickey's Hammock Driven Development. I got thinking about his idea. There is an excellent summary by the website Data Sorcery With Clojure which can be found here: http://data-sorcery.org/2010/12/29/hammock-driven-dev/

Here's the image:

Hammock Driven Development cheatsheet

Now, I disagree with him partially. I do not think it works that way, but that certainly seems to be the result. So it works, but not for the right reasons, you could say.

So Rich is right, partially. Results, OK; reasons, nein.

My theory of Hammock Driven Development

So how Rich explains it is that there are basically two minds, Evaluation and Preparation. They probably don't go by these names in Rich's world, but you get the idea.

Now here's where I disagree. There's only one mind in my book, it just has different priorities. Now let me explain that, cause it's not that simple.

Attention units

For now assume attention units are just some kind of unit of concentration or something, I'll explain it soon enough.

OK, so, here's my idea of how the mind actually works, in terms of the zone. One mind, different priorities.

So let's say I'm just standing still. My mind at this point is doing all kinds of shit. Keeping track of bodily functions, breathing, blood pumping, all of the organs that process food and stuff, recording perceptions and evaluating them, keeping balance of the body. All of this shit is happening real time, often so quickly and so easily we don't pay much attention to it, but it's still happening. It is still doing all of that stuff.

Now, consider when you are sleeping or just relaxing. A lot of these senses are dulled or just completely shut down. When I'm sleeping all that's happening is breathing, blood pumping, and some bodily functions, but in general my body is quiet. My senses are practically hibernating (in computer terms), just awaiting the call that is "I've slept enough" or some kind of waking up call, of any kind.

Now, consider the mind's availability in both of these situations. In the first it's busy, in the second it's got free time. People often consider dreams as just your mind processing and organizing the stuff that happened during the day, that might very well be it entertaining itself, but that's another topic.

OK so now, what are attention units? Let's say you are... thinking about cats with your eyes closed while sitting or lying down. Most of your attention units are on the cats, that's what you're thinking about; some are on perceptics, etc. There are also some on what you're going to eat next, on mental memos, on that call you need to make right after thinking of cats, etc. Everything you have any attention on has some of your attention units.

Some people have more of these attention units at their disposal, because of their environment, their natural "talent" (not my favorite word to be honest) etc.

Now, according to Rich, all the problems we've solved have been solved by the Evaluation mind at some point, and while we consider ourselves smart because we think of solutions fast it's just because at some point this problem had already been solved by the Evaluation mind and you just need to call it back to your Preparation mind. Rich is actually quite vague IIRC on this, but that's the general idea AFAIK.

So here is my opinion: that's wrong. It's one mind, just with different priorities at different times, with different problems and different amounts of attention units. Turns out while sleeping it has more time to spend attention units on your problem, and thus solves it.

People that are good at maths (without being some sort of parrot) just have more attention units at their disposal... and practice. They have these attention units in order to keep "variables" in their mind of the number values they need to keep floating around. Plus a good mind processor.

Summary

  • Not two minds, only one.
  • Mind has different priorities.
  • While sleeping the priorities are few, thus more time for your problems.
  • While awake, you can still solve problems from scratch; speed varies from person to person...

Of course, this depends greatly on the person, their emotional and mental state and aptitude.

That's just my two cents.

P.S.: Forgot to talk about the zone. Oops. Basically, all of your attention units are on what matters to you, thus you have more time to think, and nothing distracts you since none of your attention units are on the distractions.

]]>
The world of Window Managers https://greduan.com/blog/2014/02/04/twowm Tue, 04 Feb 2014 00:00:00 +0000 2014-02-04-twowm Note from 2015: Nowadays I don't use any of these except bspwm and/or 2bwm sometimes, nowadays I use swm+wmutils or something really simple like that.

Let me start off by saying that I'm writing this at 23:00 Switzerland time, so please forgive any stupidities. Usually I go to sleep at 22:00 or something like that. I really need to go to sleep for tomorrow...

Anyway. I'm going to be talking about WMs (Window Managers) in GNU/Linux, those that can be found in Arch Linux.

TL;DR: I went with XMonad for reasons.

So the TL;SR (Too Long; Still Read) version will start with me saying that I tried almost all of the WMs, if not all of them. I can confidently say I'm disappointed there isn't a single stacking WM that is keyboard-controlled besides the mouse. That would be my ideal setup. But there isn't, so I tried a lot of dynamic WMs.

A short list of those I tried is the following:

  • Alopex
  • awesome
  • BSPWM
  • XMonad

Those are the most notable ones. The ones that I honestly tried for a while and can give a fair review or opinion for.

Alopex

Let's start with Alopex. Alopex is the one I like the most after XMonad. I've used it for at least... 2 or 3 months or something.

It is very nice, almost ideal, but it lacks the configurability I like.

awesome

Another one I really liked. I used this one for... 1 or 2 weeks, give or take a few.

I liked it but it's too heavy for me. Also I don't like Lua and the config seems to change every single update, which isn't good IMO.

BSPWM

This is the one I was most excited for TBH, but I couldn't get Dzen2 to work correctly with it so I had to give up. Super powerful window management though, I suggest you try this one before XMonad.

XMonad

Finally XMonad. The king of tiling WMs it seems. At least for me. What makes XMonad unique for me is that it's written in Haskell, the language I have my eye on, and that I'm currently trying to learn, to a degree, to see if I like it.

This is the one I went with because of reasons.

]]>
My trip to Switzerland, part 1, getting there https://greduan.com/blog/2014/01/15/mttsp1 Wed, 15 Jan 2014 00:00:00 +0000 2014-01-15-mttsp1 I'd like to make it clear that this will probably be written over several sessions over several days. I've mentioned several times that I won't have good internet during the time I'm over here, so I'm sticking to that. Sorry if some info is inconsistent, I'll do my best to keep it consistent.

Also the time and date formats will be kept as near to the Swiss method for written time as possible, since I like it. I.e. time is in 24-hour format and dates use "." in between, in DD.MM.YYYY format, not the crazy MM/DD/YYYY format the USA uses. Swear to god, all they need to do is make a USD worth 99 cents and they will have officially messed up their entire numerical system.

Also I'll be skipping all the drama and just get to the point. I won't add unnecessary info that doesn't really matter, though people would usually tell you all about it. lol


This was written the 15.01.2014 at 11:40.

So we got to the airport and we just checked in, no troubles; we ate something and then we just waited to board the plane.

I'd like to point out that at this point there was like a 50 person line to get on the plane. We were able to skip that thanks to the magic of my seat being moved, letting me and my family just skip the whole line. Yay for magic!

We flew with Lufthansa. It was a really nice service, the staff was very nice and spoke very good English. Also everybody was good looking (but that's to be expected from an airline, AFAIK).

We arrived ahead of schedule, I'm told, but we left behind schedule, I'm told. So I guess I'm just really good at planes. lol

Then we took another airplane, but this one was just a 45 minute flight or so.

Next step was to take a bus to the train station and ride the train to the Grenchen Nord station. So we missed our stop. Well, not really, but we didn't know that the doors had to be opened manually (by pressing a button), so we missed the stop.

We contacted our friends over here and they told us to just take the train to the Grenchen Süd station, which was actually nearer to our destination (but required a train switch).

So we rode the train to Grenchen Süd, this time we knew how to open the door, got off and familiar faces were there so all is well.

Got a car ride to the place we're staying at, unpacked a bit, took a shower and went to sleep.

And we're back to the present time of me writing this. I actually just woke up and I have no internet, everybody else just started waking up.

OK so now to the all important lessons (that I can remember) learned of every trip:

  • For god's sake keep calm! It is not nice to be nervous or to be around someone that's nervous.
  • Travelling on airplanes isn't as special as it used to be. Now there are screens on the seats and all kinds of stuff to cheat at getting entertainment.
  • Buses and trains require you to open the door, they will not open alone at every stop. Know where you're going and at which stop it is you're getting off.
  • If on a new area, keep track of what people are doing, it will help you figure out what to do yourself.

And those are all I can think of. This was written in one go so it should be fairly consistent.

]]>
A love letter to Arch Linux https://greduan.com/blog/2013/12/13/alltal Fri, 13 Dec 2013 00:00:00 +0000 2013-12-13-alltal Notice from 2015: Well I sure did abandon you again, didn't I? Now I'm using OpenBSD. lol

This is a love letter to Arch Linux. It's the first time I write something like this, it was very fun. If you see anything in between square brackets ([]) it's an author's note. Note that some stuff is exaggerated for the drama but my feelings are still true.


Arch, I remember it clear as day how I abandoned you in July for a Mac Mini. I remember typing the last sudo systemctl poweroff you would see for over 3 months. I remember the slowness with which you turned off that last time.

I remember the speed and enthusiasm with which you booted up after months of hanging around the modem in storage, only to see your sadness when I turned you off quickly as I was going to install Windows 7 over you. [Was going to lend computer.]

I remember downloading your torrent again a week ago, I remember the speed with which you downloaded and with which you dd'd to an 8GB USB.

It is completely understandable that you had a tantrum when I came back to you, it is completely understandable that you gave me a hard time with the internet.

And of course, the modem. It is completely understandable that you have trouble with him, it is completely understandable that the modem rejects a second connection without being reset after you left him, as soon as I tried to install you again. [This is a reference to the fact my damn modem has to be restarted every time this computer wants to connect to it.]

YET! After convincing your sister to help me (ArchBang) you were willing to work again for me. And now every time I boot up you're there to welcome me home.

I will always love you, Arch. I promise I won't be promiscuous again. I promise I won't leave you again for another OS. I promise I will look after you, even when I get new computers. You're the only one for me.

]]>
Switching from Zsh to fish https://greduan.com/blog/2013/11/13/sfztf Wed, 13 Nov 2013 00:00:00 +0000 2013-11-13-sfztf Notice from 2015: Nowadays I don't use bash, fish or zsh, nowadays I use good old (m)ksh because it's POSIX compliant and really fast.

In this post I'll be talking about my experience with switching from zsh (Z shell) to fish (Friendly Interactive shell). I'm not gonna talk about how or why it's better than the other shells, I'm only gonna talk about the process of switching for me.

Where to start... well, how about with the fact that I tried to do the switch before? I don't remember exactly what problems I had last time, but I'm pretty sure it was something with Vim.

I decided to try the switch again taking advantage of the fact that a new version had been released recently (v2.1.0). Seemed like a good idea to try again, also the fact that I like the features it has, of course.

The switch wasn't as problem-free as I wished but it was quite a smooth process.

First, I was running across problems with Vim which I imagine I wasn't able to solve last time, but solved this time. The problem was a startup error where Vim complained about not finding a certain file or something. I just had to add the following to the top of my .vimrc file to fix it:

if $SHELL =~ 'fish'
	set shell=/bin/sh
endif

Basically, if Vim detects that the $SHELL variable is fish, this tells Vim to interact with the shell as if it were sh. As I understand it anyway, don't take my word for it. That fixed that.

Another problem I was running into was that Emacs was complaining about not being able to find the package.el file. The reason for the error was that I was opening an old version of Emacs (v22.x). This was just a matter of updating the $PATH variable, which was done with the following line in my config.fish file:

set PATH /usr/local/bin /usr/local/sbin /usr/local/share/npm/bin /usr/local/opt/ruby/bin $HOME/bin $HOME/.tmuxifier/tmuxifier/bin $PATH

Umm... those were the only really noteworthy parts. The rest was just a matter of translating my zsh config to fish syntax, as in the sketch below.
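For illustration, here's the flavour of translation involved. These particular lines are made-up examples, not from my actual config, but the syntax is real:

# zsh (.zshrc)
export EDITOR=vim
alias gs='git status'

# fish (config.fish)
set -x EDITOR vim
function gs
	git status $argv
end

If I remember right, fish also ships an alias helper, as in alias gs 'git status', which generates that function wrapper for you.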

]]>
Light Table, the new Emacs https://greduan.com/blog/2013/11/07/lttne Thu, 07 Nov 2013 00:00:00 +0000 2013-11-07-lttne Notice from 2015: I later realised the errors of this blog post, including the fact that I was trying to compare an extensible editor to Emacs, the most extensible of them all. I briefly wrote about it in a later blog post.

The title may be misleading, so I'm gonna make it clear right now: I couldn't come up with anything better. It's gonna stay like that, so you can make suggestions for a better title, but I won't change it.

So let me explain why I chose this title. That's what this post is about.

There is a popular saying that Vim users love to use against Emacs users in the holy editor wars. That saying is "Emacs would make a great OS, if only it had a good text editor" or something along those lines.

And I can sort of agree with that. Emacs' editor is not superior to Vim's on almost any front. But let's look at the bright side of that saying.

They're criticizing the text editor, but they're acknowledging it as an OS. In other words, it apparently can be extended immensely. And it has proven able to do just that: it has just about anything that can be done in text. Email reader, news reader, chat, calendar, you name it. And if it isn't already implemented, you can do it yourself! ;)

And of course, it can be extended immensely for development: debuggers, the CLI, automatic compiling functionality; people have made it an IDE as well. So its limits are practically none. Oh, and let's not forget the language you use to customize it: Emacs Lisp, a.k.a. Elisp, its own dialect of Lisp. You can't get a lot more customizable than that.

Except you can... Enter Light Table!

Light Table is one of the new kids on the block. It is coded in ClojureScript, a Lisp dialect that essentially compiles to JS. The other new kid is Lime, but it still doesn't have a working frontend (at the time of writing).

Chris Granger (the main dev of Light Table) has created a unique architecture with Light Table. (As I understand it) you can add or remove anything and everything. Well, maybe not everything, since you'll always want some stuff there, but you know what I mean. Think of the ECS (Entity Component System) pattern from game development; that's almost exactly what Light Table has as an architecture. It is the first text editor of its kind in this sense.

He has created a very interesting system with Light Table where the basic text editor is loaded by default, and everything that default text editor does is just a set of behaviours. These behaviours can be added, removed, disabled and enabled; a sketch of what that looks like follows below. I'm sure you get where I'm going with this.
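To give you an idea, here is a sketch of a user behaviours file. I'm reconstructing the format from memory of Light Table's docs, so treat the exact behaviour names as illustrative, not gospel:

;; attach behaviours (:+) to objects by tag; it's all plain data
{:+ {:app [(:lt.objs.style/set-skin "dark")]
     :editor [:lt.objs.editor/no-wrap]}}

Detaching a behaviour is the same kind of edit, under a :- key; the editor reconfigures itself from this data, no recompiling needed.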

It could be anything and everything in due time! It could be the first real competitor to Emacs in terms of customizability as well. Think about it: it's written in ClojureScript, a Lisp dialect, and you can add or remove behaviours to the point where you could make it a NotePad or a browser if you liked.

One thing to note is that it uses node-webkit (Notice from 2015: node-webkit has been renamed to nw.js) as its platform, which allows you to call Node.js from the DOM. That's right: Light Table, the text editor with almost no limits, is technically a web browser.

That means JS is its limit. If you can't do it with JS (and Node.js), you can't do it in Light Table; but if you can do it in JS, you can do it in Light Table. JS is quickly becoming the one language to write; it's found almost everywhere nowadays. So I think Chris made a smart decision when he decided to write it in ClojureScript.

The main point is, it's got (almost) no limit on what can be added and no limit on what can be removed, something Emacs is missing.

Oh, and one more thing: Emacs is millions of lines of Elisp, plus the C code it uses. Light Table is only around twelve thousand lines of ClojureScript at the time of writing (v0.5.18).

So yeah. My point is, it'll possibly be the text editor once a couple of years pass after its full release and it is more mature.

]]>
Open source games without going poor https://greduan.com/blog/2013/10/23/osgwgp Wed, 23 Oct 2013 00:00:00 +0000 2013-10-23-osgwgp Notice from 2015: This was just an idea I had; if you implement it, cool, but I myself will probably not be implementing it. Nowadays I have a much different outlook on the subject.

TL;DR: Release your game for a fee; after a certain amount of time, release the source code, but continue selling the pre-compiled version to the public. Open source games.

So what's the idea? What do I mean by open source games?

I love open source. I try to make everything I use open source whenever possible. There are a lot of great open source projects that I use and sometimes contribute to, for example Clojure, Vim and Chromium (all of which I use), or DocPad (to which I contribute).

I also love games, I mean who doesn't? It doesn't have to be a video game to be a game; there are also physical games like Tag, Hide-and-Seek, etc. Those are all free and open source; they're just an idea, and you see kids modifying them to make them more interesting all the time! Most video games aren't open source, however. You pay for a game, you play it, finish it (or complete it), and move on to the next game. The games that are open source are so simple that there isn't any gain in keeping them closed source; if anything, you'd lose from making them closed source.

A lot of developers, if not all, love open source. However, a lot of us make games for a living. We can't make them open source; we would have to live off of donations, which are inconsistent when they come at all, and if they don't come, well... we're in deep shit if we don't have a job.

Now, this is an idea I had that tries to solve that. With it, those of us open source lovers who want to make a decent living off of making open source games would be able to do so.

The idea is simple: release the game to the public for a fee as usual, but after an agreed amount of time, which you decide, release the game's source and make it open source, while continuing to sell the pre-compiled game to the public.

So it's really a simple idea. You sell the game for a certain amount of time in order to make a living, or at least make up for what you spent during development. After that amount of time is up, you release the source.

This makes the game open source, while you still make an income from non-techie users, or just nice users.

So why would this work? Smart users would just wait a year or whatever, compile the source, and be done; they wouldn't pay anything. Why would they? They're smart about it. This idea is just a recipe for disaster!

Well, the way I see it, if your game is good enough, people won't mind paying for it. They won't mind keeping you alive by giving you $5 or whatever your game costs; in fact, they'll be glad you made the game. AND, as an added bonus, if they're interested in the game's source code, to see how you figured out that holy-crap-that-looks-crazy thing in your game, they can go and read it.

Most probably, if they don't pay for your game, it's because they want some sort of trial period or they can't afford the game anyway. So really, you're not losing much.

Of course, there'll be users who just compile the game, play it, finish and complete it, and uninstall it, never touching it again. How many users are like that? In my experience with people, not many.

So this is a trust issue. Do you trust your users to be nice enough to keep you afloat, or not? I trust them; they're playing my awesome game, so why wouldn't they want to keep me alive to make more games?

You can look at it the other way too: maybe people are interested in your game not for the game itself, but for the code. They approach your game's code because you figured something out that they've had a hard time figuring out. Or they figure out the stuff you did wrong and make pull requests.

So you have an income one way or another, be it in code or in money. If they help with the code, they improve the overall game, so the sales would be better, for example.

This is just a crazy but not impossible idea I had while thinking in bed, so it's not really fleshed out; there isn't much to flesh out yet.


]]>