Gulp Coffee

Compiling CoffeeScript with gulp

"i can just do gulp coffee it looks like.
which is what i do all day anyway."

https://twitter.com/jasontconnell/status/986275775916773376

Philadelphia Sports 2018

I just need the Flyers, Phillies, Villanova, and the Sixers to win Championships this year then I'll be satisfied.

https://twitter.com/jasontconnell/status/980961145271005184

Too bad the Flyers are out of it though.

Brain Bending Stuff

I work on some pretty brain bending stuff, but today I was amazed when I went downstairs and when I got down there I remembered what I went down for :)

https://twitter.com/jasontconnell/status/979879776151318528

Flurry of Posts

I will be making a series of posts. Basically, I find the stuff I write on Twitter hilarious :D And right now it's only available on Twitter.

Also, I was working on a guitar project. I am a huge fan of The Kinks, so I recorded myself playing guitar along with the entire Kinks album We Are The Village Green Preservation Society. It's my favorite. The rules were: record myself playing guitar, figure out the songs that same day (I did already know a few), record it all on my cell phone, and mistakes are allowed!

So generally, the quality is bad and the playing is OK. The music is still great though. Here it is: the Village Green Office Sessions.

For twitter posts, I'd probably like to write something that scans Twitter for my posts tagged with a certain tag and imports them automatically as posts. But I'm in a transition period on this site. Basically, I want to learn Google's Cloud suite of tools and rebuild it there. So updates will have to wait.

Google App Engine on Windows

For the life of me, I couldn't get the Google App Engine "Quick Start" to work for starting up the development server in Go. The command listed as "dev_appserver.py app.yaml" is the one I'm referring to. Windows kept asking me which program I wanted to use to run it.

I was trying to avoid installing another copy of Python, since one comes with the Google Cloud SDK, and I knew the "gcloud" command used Python, so I decided to go look for that gcloud.cmd. And I found it. And I copied it. Only to have it run dev_appserver.py instead of gcloud.py :)

Here it is. Just put it alongside the dev_appserver.py and it'll run fine.

@echo off

SETLOCAL

SET "CLOUDSDK_ROOT_DIR=%~dp0.."
SET "PATH=%CLOUDSDK_ROOT_DIR%\bin\sdk;%PATH%"

SETLOCAL EnableDelayedExpansion

IF "%CLOUDSDK_PYTHON%"=="" (
  SET BUNDLED_PYTHON=!CLOUDSDK_ROOT_DIR!\platform\bundledpython\python.exe
  IF EXIST !BUNDLED_PYTHON! (
    SET CLOUDSDK_PYTHON=!BUNDLED_PYTHON!
  ) ELSE (
    SET CLOUDSDK_PYTHON=python.exe
  )
)
IF "%CLOUDSDK_PYTHON_SITEPACKAGES%" == "" (
  IF "!VIRTUAL_ENV!" == "" (
    SET CLOUDSDK_PYTHON_SITEPACKAGES=
  ) ELSE (
    SET CLOUDSDK_PYTHON_SITEPACKAGES=1
  )
)
SET CLOUDSDK_PYTHON_ARGS_NO_S=!CLOUDSDK_PYTHON_ARGS:-S=!
IF "%CLOUDSDK_PYTHON_SITEPACKAGES%" == "" (
  IF "!CLOUDSDK_PYTHON_ARGS!" == "" (
    SET CLOUDSDK_PYTHON_ARGS=-S
  ) ELSE (
    SET CLOUDSDK_PYTHON_ARGS=!CLOUDSDK_PYTHON_ARGS_NO_S! -S
  )
) ELSE IF "!CLOUDSDK_PYTHON_ARGS!" == "" (
  SET CLOUDSDK_PYTHON_ARGS=
) ELSE (
  SET CLOUDSDK_PYTHON_ARGS=!CLOUDSDK_PYTHON_ARGS_NO_S!
)


SETLOCAL DisableDelayedExpansion


"%COMSPEC%" /C ""%CLOUDSDK_PYTHON%" %CLOUDSDK_PYTHON_ARGS% "%~dp0\dev_appserver.py"" %*

On guns

Going on Facebook or something after a tragedy like what happened in Florida is such a waste of time. You really see the worst of both sides. I'm not for or against guns; I see the argument on both sides. (I recently did a personality test and I'm firmly INTP, which is nicknamed "Logician". I think and perceive, and I don't really let emotions lead the decisions I make. So I'm also not very opinionated.)

My thoughts are: there have always been crazy people, but there haven't always been guns. In order for someone to kill 17 people before guns, before assault rifles or high capacity magazines, they'd have to be a serial killer, or in some kind of armed forces, or a king. Or the leader of a cult. I'm simplifying. Basically, a single person couldn't wield something that gave them so much power without spending years trying to earn it. This gives so much power to anyone who manages to stay sane long enough to pass a background check, save up a couple hundred bucks, and buy a gun. This is a desire for power with the classic means to gain it largely nerfed.

That's all.

Advent of Code 2017 - Day 13

Day 13 reveals itself as a sort of lock picking exercise. Part one is a simple tumble (get it) through the series of inputs they give you, to figure out if you would be caught on a certain layer, and if so, do some multiplication to get your answer. Simple.

The puzzle could also be thought of as the scene in The Rock (the movie about Alcatraz with Nicolas Cage and Sean Connery) where, to get into the prison, Sean Connery has memorized the timing of the fires that blaze, and rolls through them unscathed.

Except, the timings way down the line don't match up since they themselves are on their own timers. And there's like 40+ of them.

The sample input looks like this:

0: 3
1: 2
4: 4
6: 4

So, layer 0 has a scanner with range 3. On "second 0" the scanner is at position 0, on second 1 it's at 1, on second 2 it's at 2, on second 3 it's back at 1, and on second 4 it's back at the start, blocking any would-be attackers.

Layer 1 only has range 2, so it just fluctuates back and forth between 0 and 1.
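
One way to see the pattern: a scanner with range r is back at the top every 2*(r-1) seconds, so its position at any second can be computed directly. A quick helper to illustrate (this isn't part of my solution, just the cycle written out, and it assumes r is at least 2):

func position(r, t int) int {
	// the scanner sweeps 0..r-1 and back, repeating every 2*(r-1) seconds
	period := 2 * (r - 1)
	offset := t % period
	if offset < r {
		return offset
	}
	return period - offset
}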

Since the puzzle input may include gaps, and it's easier (probably) to complete the puzzle with no gaps, the first step is to fill them in! As usual, I'm writing my answers in Go:

func fill(scanners []*Scanner) []*Scanner {
	max := scanners[len(scanners)-1].Depth
	for i := 0; i < max; i++ {
		if scanners[i].Depth != i {
			s := &Scanner{Depth: i, Range: 0, Nil: true}
			scanners = append(scanners[:i], append([]*Scanner{s}, scanners[i:]...)...)
		}
	}
	return scanners
}

That line, "append(scanners[:i], append([]*Scanner{s}, scanners[i:]...)...)", with all the dots!! What it's doing is pretty simple, though.

If we don't have a scanner at depth i, insert a scanner with depth i at the current position. "scanners[:i]" is all scanners before i (not including i). "scanners[i:]" is all scanners from i on (including i). (The : (colon) syntax is very subtle.) So we want to insert the new scanner between those two. That's all it's doing. The ellipsis confusion is just because "append" takes a variadic list of parameters, and you can expand a slice into variadic arguments with the ellipsis. Done!
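
If the ellipses are hard to parse, that line is the same thing split into two steps:

rest := append([]*Scanner{s}, scanners[i:]...) // a new slice: the filler scanner, then everything from i on
scanners = append(scanners[:i], rest...)       // everything before i, followed by that new slice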

We'll need a method to move all of the scanners every step. That's pretty straightforward. I called this method "tick". The Scanner is just a struct with Depth, Range, Current, Dir for telling which direction the thing is moving, and a Nil flag for the filled-in gaps.
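
For reference, the struct behind all of this looks roughly like the following. I'm reconstructing it from how the fields get used, so treat the exact shape as an approximation:

type Scanner struct {
	Depth   int  // which layer this scanner sits at
	Range   int  // how far the scanner sweeps at this layer
	Current int  // the scanner's current position within its range
	Dir     int  // +1 sweeping down, -1 sweeping back up
	Nil     bool // true for filled-in gap layers with no real scanner
}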

func tick(scanners []*Scanner) {
	for _, s := range scanners {
		if s.Nil {
			continue
		}

		if s.Current == s.Range-1 {
			s.Dir = -1
		} else if s.Current == 0 {
			s.Dir = 1
		}

		s.Current += s.Dir
	}
}

Part 1 was to just send a "packet" (or Sean Connery) through, and every time you're caught by a security scanner (an explosion, going back to the movie), multiply the depth times the range at that scanner and add it to a running total. That part was fine, and you could do it with the physical motion of passing the Sean Connery through :)
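
I didn't include my part 1 code here, but a minimal sketch of that walk-through might look like this (it assumes the scanners slice has already been through fill, everything starts at second 0, and it mutates the scanners as it goes):

func severity(scanners []*Scanner) int {
	total := 0
	for depth := 0; depth < len(scanners); depth++ {
		s := scanners[depth]
		// caught if the scanner is at position 0 the second the packet arrives at its layer
		if !s.Nil && s.Current == 0 {
			total += s.Depth * s.Range
		}
		// everything moves while the packet steps to the next layer
		tick(scanners)
	}
	return total
}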

On to part 2, which is "you need to get through without getting caught this time". Getting caught simply means being at depth "d" when the scanner at "d" is at position 0. You could brute force this.

For brute force, you'd start at delay 0, then try to figure out if you can make it all the way through. If not, move to delay 1 and try again. For each delay, you have to run the tick method. For a delay of 100 seconds, tick would have to be run 100 times just to get the puzzle into the correct starting state. So this becomes computationally intensive!
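
A sketch of that brute force check, just to show why it gets expensive (reset here is a hypothetical helper that puts every scanner back at position 0, direction +1):

func caughtAt(scanners []*Scanner, delay int) bool {
	reset(scanners) // hypothetical: Current = 0, Dir = 1 for every scanner
	// replay the delay one tick at a time
	for i := 0; i < delay; i++ {
		tick(scanners)
	}
	// then walk the packet through, one layer per second
	for depth := 0; depth < len(scanners); depth++ {
		if !scanners[depth].Nil && scanners[depth].Current == 0 {
			return true // caught; this delay doesn't work
		}
		tick(scanners)
	}
	return false
}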

This is a fine solution in most instances. In this instance, though, I let it run over lunch and checked in with it 44 minutes later, and it wasn't complete yet! So, back to the drawing board.

But wait!! Math is a thing. And it's very useful! I'm actually pretty certain that I don't even need to check the final answer by actually traversing the sequence of layers; it's just the answer. Due to math!

So, to get through safely, the scanner at a particular depth has to not be at position 0 when we're passing through that depth. I wrote a method to figure this out, called "possible". It's pretty much the key to the puzzle, and to solving it insanely fast.

func possible(scanners []*Scanner, delay int) bool {
	p := true
	for _, s := range scanners {
		blocking := (s.Range*2 - 2)
		p = p && (s.Nil || ((delay+s.Depth)%blocking != 0))
	}
	return p
}

A "Nil" scanner is a scanner that was filled in due to gaps. This one has 0 range and can't block anything. So if it's one of these, it can pass through this depth at this time.

The (s.Range * 2) - 2: call this "blocking", or whatever. I called it blocking since the scanner is at position 0 of its range every "blocking" number of steps. A scanner with range 4 is back to 0 every 6 steps (4 * 2 - 2). To determine whether a delay is possible, a layer 7 steps in cannot be blocking on second delay + 7, otherwise the packet gets caught. Hence (delay + depth) % blocking: after the delay, the scanner at depth "depth" has to not be at the start (mod blocking == 0). "p" accumulates, layer by layer, whether we could pass through everything up to and including the current one. You could shave off some iterations here by checking p and breaking out of the loop as soon as it's false. I might actually update it to do that and report back! It takes about 1 second to run currently. (CONFIRMED: it runs in about half a second now.)
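
The early-exit version would look something like this; same math, it just bails at the first layer that would catch us:

func possible(scanners []*Scanner, delay int) bool {
	for _, s := range scanners {
		if s.Nil {
			continue // gap layer, nothing here can catch us
		}
		if (delay+s.Depth)%(s.Range*2-2) == 0 {
			return false // this scanner is at position 0 exactly when we arrive
		}
	}
	return true
}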

So, all that's left is to brute force the delays, checking whether it's possible to get through the whole sequence at each one without getting caught, but without having to actually simulate the trip, which speeds it up somewhere on the order of a million percent :)
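
Finding the answer is then just counting up until possible says yes; roughly:

delay := 0
for !possible(scanners, delay) {
	delay++
}
fmt.Println("part 2:", delay)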

Check out the full solution here - https://github.com/jasontconnell/advent/blob/master/2017/13.go

Happy coding, and happy holidays!!

Some Plays on Words and Phrases

Sometimes hilarious things pop into my head. Actually, I'd wager that sometimes that doesn't happen. Implying it happens most of the time. Here are some new ones (for me) for you potential comedy writers out there. I don't need any public credit :P Maybe just a comment or email.

"Do I look like somebody who " ... You know this. "Do I look like someone who checks the toilet seat before they sit down?"  Or "Who cares what color their shirt is?"  etc. Here are a few fun ones I came up with.

"Do I look like somebody who looks like somebody?"   Yeah, pretty bad.

"Do I look like somebody who asks to be compared to anybody by their looks?"  A little better.

"Do I look like somebody who looks at things?"  Might be said by a blind person, so after my original inspiration for this, the ingenuity has been lost and it has been discounted back to lacklustre.

That's all I got though. Any more would be forcing it. You might argue the first 3 are as well ;)

Go Dep

As of this moment I've updated all of my Go code to use "dep". It's useful, and it works. It's slow, but that'll be fixed. Phew! It was a slight pain updating all of my repos to it. But man, it'll make life easy going forward.

First of all, I had to get all of my dependencies, code libraries that I've written to be shared, into my public github account. After that was complete, I had to update all of the imports. For instance, instead of importing "conf", which was a folder directly inside my Go path, I now import its github path. The old setup made things interesting, for a few reasons.

  1. I had to create a repo (in my private server) for each dependency.
  2. If I didn't clone that repo on another machine, the build would fail.
  3. If I forgot to commit the dependency repo but committed the repo that was depending on it, the build would fail on my other computer.

These are problems. Here are a few solutions...

  1. For dependency repos, I may only have them in github going forward. Until it's easy to go get private server repositories. All of my repos are literally on the same server this site is running on.
  2. Doing dep ensure will get all of my dependencies, at the version it was locked at.
  3. Using dep and the dep paths (github.com/jasontconnell/____) will ensure that the project will only build if it's in github.

You'll see all of my dependency repos within github. There are a few full-fledged projects out there as well (including scgen :). It is just a matter of updating the code to use the github url instead of the local file url, running dep init, and waiting :)
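
As an example of the import change (the repo path here is hypothetical, it just follows the github.com/jasontconnell/____ pattern above):

// before: a folder directly inside my Go path
import "conf"

// after: the same library, fetched from github and managed by dep
import "github.com/jasontconnell/conf"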

One tricky thing is I have a bash script to automatically deploy websites on that server. It looked like this:

#!/bin/bash

cd src/$1
git fetch

deploybranch=`git branch -a | grep deploy`

if [ -z "$deploybranch" ]; then
   echo no branch named deploy. exiting.
   exit
fi

git checkout deploy
git pull
cd ..

echo building $1

go build -o bin/$1 $1

outdir=/var/www/$1

echo $outdir

PID=`pgrep $1`

echo found pid $PID
if [ -n "$PID" ]; then
    echo killing process $PID
    sudo kill  $PID
fi

sudo cp bin/$1 $outdir/

if [ -d "$PWD/$1/content" ]; then
    echo copying content
    sudo cp -R $PWD/$1/content/ $outdir/content
fi

if [ -d "$PWD/$1/site" ]; then
   echo copying site
   sudo cp -R $PWD/$1/site/ $outdir/site
fi

cd $outdir

sudo nohup ./$1 > $1.out 2> $1.err < /dev/null & > /dev/null

echo $1 started with pid $!

exit

I'm very noobish when it comes to shell scripting. Anyway, this will check out a deploy branch if it exists, pull latest, run go build, kill the current process, copy the binary and content over to the website folder, and start the process back up. Simple build and deploy script.

It is named "deploy.sh". It lives in /home/jason/go and is run just like this: "./deploy.sh jtccom". It finds the folder "jtccom" inside of src and does all of the operations there. However, since I'm now using "dep", and none of the dependency files exist within the "vendor" folder (you really shouldn't commit that... dep creates reproducible builds), I have to modify it to run dep first. This has to happen after the pull. I've included the entire contents of the new deploy.sh here.

#!/bin/bash

cd src/$1
git fetch

deploybranch=`git branch -a | grep deploy`

if [ -z "$deploybranch" ]; then
   echo no branch named deploy. exiting.
   exit
fi

git checkout deploy
git pull

if [ -f Gopkg.toml ]; then
   echo Running dep ensure
   dep=`which dep`
   $dep ensure
fi

cd ..

echo building $1

GOCMD=`which go`
$GOCMD build -o bin/$1 $1

outdir=/var/www/$1

echo $outdir

PID=`pgrep $1`

echo found pid $PID
if [ -n "$PID" ]; then
    echo killing process $PID
    sudo kill  $PID
fi

sudo cp bin/$1 $outdir/

if [ -d "$PWD/$1/content" ]; then
    echo copying content
    sudo cp -R $PWD/$1/content/ $outdir/content
fi

if [ -d "$PWD/$1/site" ]; then
   echo copying site
   sudo cp -R $PWD/$1/site/ $outdir/site
fi

cd $outdir

sudo nohup ./$1 > $1.out 2> $1.err < /dev/null & > /dev/null

echo $1 started with pid $!

exit

I've updated how it calls go and dep, since calling just "go" didn't work anymore for some reason. Command not found. Anyway, here's the output.

[jason@Setzer go]$ ./deploy.sh jtccom
remote: Counting objects: 3, done.
remote: Compressing objects: 100% (3/3), done.
remote: Total 3 (delta 2), reused 0 (delta 0)
Unpacking objects: 100% (3/3), done.
From ssh://myserver/~git/jtccom
   03999eb..7c49dc6  deploy     -> origin/deploy
   03999eb..7c49dc6  develop    -> origin/develop
Already on 'deploy'
Your branch is behind 'origin/deploy' by 1 commit, and can be fast-forwarded.
  (use "git pull" to update your local branch)
Updating 03999eb..7c49dc6
Fast-forward
 Gopkg.lock | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
Running dep ensure
building jtccom
/var/www/jtccom
found pid 29767
killing process 29767
copying content
jtccom started with pid 30260

That Gopkg.lock was updated because I had to update a dependency to also use the github version of the dependency it was importing, since I deleted all of the dependencies on this server. So that was it. It's very easy to use and will make my life a lot easier, albeit a little bit more tedious. BUT! I really can't complain, because the old way of doing things was painful. Forget to commit a dependency, and my code doesn't build on my work computer, so I have to wait until I get home :P Plus everyone can look at the little dumb Go code I use across multiple projects! Enjoy.

Go and Sitecore Interlude

This is part of the same program I'm developing to generate, serialize, and deserialize items, but it's a general helper method that I found very useful and that can be used in any Go program. It can be expanded to be more complete; I pretty much did it for the types that I'm currently working with. You'll see what I mean.

The application is written as one program that does it all (generation, serialization, deserialization). So it's looking for configuration settings for all of those pieces of functionality. You generally don't want to serialize the same Sitecore paths that you want to generate code against. However, having the configuration in one file is not what I wanted. Here are the drawbacks.

If the configuration is in one file, you would have to update your configuration file in consecutive runs if you wanted to serialize, git fetch and merge, then deserialize. Your configuration would be committed and would be set for the next person that wants to run the program. You couldn't write bat files to run updates.

You could use the flag package to control the program pieces. Of course. But I set out to have multiple configs. For instance, if you wanted to serialize from a shared database and then deserialize into your local database. You could also make the config file a flag and set it to different huge files that each only differ by the connection string.

You could.

But then I wouldn't have this cool piece of code :)

Basically, when you run the program, you call it with a "-c" flag which takes a comma-separated list of config files. The program reads them in order and merges them, having configuration values later in the chain overwrite values from the previous files. I do this using Go's reflect package, as follows:

func Join(destination interface{}, source interface{}) interface{} {
	if source == destination {
		return destination
	}

	td := reflect.TypeOf(destination)
	ts := reflect.TypeOf(source)

	if td != ts || td.Kind() != reflect.Ptr {
		panic("Can't join different types OR non pointers")
	}

	tdValue := reflect.ValueOf(destination)
	tsValue := reflect.ValueOf(source)

	for i := 0; i < td.Elem().NumField(); i++ {
		fSource := tsValue.Elem().Field(i)
		fDest := tdValue.Elem().Field(i)

		if fDest.CanSet() {
			switch fSource.Kind() {
			case reflect.Int:
				// ints: only fill in if the destination is still zero
				if fDest.Int() == 0 {
					fDest.SetInt(fSource.Int())
				}
			case reflect.Bool:
				// bools: only fill in if the destination is still false
				if !fDest.Bool() {
					fDest.SetBool(fSource.Bool())
				}
			case reflect.String:
				// strings: only fill in if the destination is blank and the source isn't
				if fDest.String() == "" && fSource.String() != "" {
					fDest.SetString(fSource.String())
				}
			case reflect.Slice:
				// slices: append the source's values onto the destination's
				fDest.Set(reflect.AppendSlice(fDest, fSource))
			case reflect.Map:
				// maps: add the source's entries, overwriting on key collisions
				if fDest.IsNil() {
					fDest.Set(reflect.MakeMap(fDest.Type()))
				}
				for _, key := range fSource.MapKeys() {
					fDest.SetMapIndex(key, fSource.MapIndex(key))
				}
			default:
				fmt.Println(fSource.Kind())
			}
		} else {
			// unexported fields can't be set; just report the field name
			fmt.Println("Can't set", td.Elem().Field(i).Name)
		}
	}

	return destination
}

So, you can see what I mean when I said it can be expanded. I'm only doing strings, bools, ints, slices and maps. The slice handling is different in that it adds values to the current slice. Map handling will add entries or overwrite if the key exists. Strings will only overwrite if the existing string is blank and the source isn't blank. So that's probably different from how I described the code in the beginning :)
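
To give an idea of how Join gets used, here's a sketch of reading the -c list and folding the files together. This is not the actual scgen code, and the Config fields are made up for the example; note that with Join's fill-only-if-empty rule for scalars, folding in this order means earlier files win for those fields (the caveat from above).

import (
	"encoding/json"
	"io/ioutil"
	"log"
)

// hypothetical config type; the real one lives in scgen
type Config struct {
	ConnectionString string
	Serialize        bool
	Generate         bool
	BasePaths        []string
}

func loadConfigs(paths []string) *Config {
	merged := &Config{}
	for _, p := range paths {
		next := &Config{}
		data, err := ioutil.ReadFile(p)
		if err != nil {
			log.Fatal(err)
		}
		if err := json.Unmarshal(data, next); err != nil {
			log.Fatal(err)
		}
		// fold each file into the running config
		merged = Join(merged, next).(*Config)
	}
	return merged
}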

Go is very useful. There's like, nothing you can't do :)

So the program is called like this:

scgen -c scgen.json,project.json,serialize.json

scgen.json will have the template IDs for "template" and "template field", stuff that's pretty OK to hard-code. If Sitecore were to change those template IDs, I'm fairly positive there's a lot of existing code out there that would break.

project.json has the connection string, the field type map, serialization path (since it's used for serialization and deserialization), and base paths for serialization.

serialize.json, in this instance, only has { "serialize" : true }  as its entire contents. Files like "generate.json" have "generate": true  as well as the file mode, output path, the Go text template, and template paths to generate.

So these files can be combined in this way to build up an entire configuration. The bools like "serialize" and "generate" are used to control program execution. The settings can be set in separate files, different files can be set and used depending on the environment, like a continuous integration server, or in a project pre-build execution. I foresee this being used with bat files. Create a "generate.bat" file which calls with generate.json in the config paths, etc for each program mode. Or a bat file to serialize, git commit, git pull, and deserialize. Enjoy!