New Phone and Service

In my previous post on tidying, one of the ways I said I was going to "tidy my finances" was to switch phone providers.

My coworker has been on Google Fi for a few years, and he has always talked about how little data he uses and how his monthly payment lands somewhere in the 30-50 dollar range. While jealous of his bill, I always came back with the fact that I get unlimited data. Unlimited Data!! For less than $120 a month. 120 dollars!

The company advertised the plan as unlimited data and calling for $70 a month. However, I was on a program where you can upgrade to a new phone any time you want, up to three times a year. The phone lease, various taxes, and a phone protection plan apparently cost me another $50 a month, and I don't remember ever asking for the phone protection. I left the old company a little mad at them because of that (a fact I only discovered while switching), and because I couldn't just turn my phone in and kill the lease; they would charge me for the remainder of it. WTF?!?  I still have the phone, since the store said the main HQ will bill me later. At that point I plan on protesting it.

I'm now 10 days in with Google Fi and a shiny new Google Pixel 3 XL. Over those ten days I've used 150 MB of cellular data, and 15 GB over WiFi. I always thought I'd be crippled without an unlimited data plan, or 6 GB or whatever, but it's actually pretty easy.

Data Saver

Google phones offer a Data Saver mode. I thought it was new to this phone, but really I had just never checked or needed it. It's there, and you only start to care once you're off an overpriced unlimited data plan. Data Saver is an amazing piece of software! It turns off background data for all apps except a couple enabled by default and any you explicitly allow (none on my phone ...).  My data usage is at an all-time low.

To get the data that low, though, you really need to make some sacrifices. No web browsing. No YouTube. No streaming music. These can be alleviated by downloading things over WiFi. I've downloaded a few albums, and I download some podcasts before leaving home or the office. I have plenty of material to keep me entertained for many drives back and forth to work.

It will get tricky once I need to go somewhere unfamiliar and use the Google Maps app. You can download your area for offline use instead of streaming that data, but traffic needs to be real time!  Still, this isn't data elimination, just a diet. I might go up to 1-2 GB, and that's fine, since my bill will still be under $50, $68 less than what I used to pay.

Phone

For the phone, like I said, I was leasing phones with the other company. To get out of the lease game, I bought the phone outright on my credit card. $800 total! Yes, it's steep, but I will pay it off quickly. The way Google Fi works, it only ever charges you for up to 6 GB of data, even if you go over. With the $20 base plus $60 for data at the most expensive (add some taxes), a month of Fi is still $20+ less than the old plan. At that worst case, using maximum data every month, the phone and plan switch would pay for itself in 40 months. If I use 1-2 GB a month, my monthly payment is around $68 less than my old plan, and the switch pays for itself in about 12 months. If I shut off data completely, I'm $93 cheaper a month and the switch pays for itself in under 10 months.  Figure I'll own this phone for a few years, at which point I can trade it in and get the next faster phone cheaper. This is big-time savings and I was stupid for not doing it a while ago.
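
Just to make the back-of-the-envelope math explicit, here's a tiny Go sketch. The helper and the rounding are mine, and taxes are ignored, so the exact month counts can land slightly differently than the rough numbers above.

package main

import "fmt"

// breakEvenMonths: how many months of savings it takes to cover an upfront
// cost, rounded up. Rough sketch; taxes ignored.
func breakEvenMonths(upfront, monthlySavings float64) int {
	months := int(upfront / monthlySavings)
	if float64(months)*monthlySavings < upfront {
		months++
	}
	return months
}

func main() {
	const phoneCost = 800.0
	fmt.Println(breakEvenMonths(phoneCost, 20)) // worst case, max data every month: 40
	fmt.Println(breakEvenMonths(phoneCost, 68)) // typical 1-2 GB months: 12
	fmt.Println(breakEvenMonths(phoneCost, 93)) // data shut off completely: 9
}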

As long as Google Fi exists, I won't be on a major carrier. It's plain dumb.

Tidying

Over the past few weeks, I've been on a tear cleaning up my place. I watched the Marie Kondo Netflix series, and I was kind of inspired by it. I decided to try it on my clothes. So I went through everything, got rid of stuff that didn't "spark joy" (more on this later), and folded everything into tiny little squares!  At the end of the process I had 2 empty drawers, in a dresser whose drawers were previously so stuffed they were difficult to shut! And folding everything that way made it all fit much better.

Inspired by the space saving produced by my clothes experiment, I decided to do the rest of my rooms in the upstairs part of my house. The collection of stuff elsewhere up there wasn't so drastic, so I moved on to the downstairs.

The living room and dining room are particularly cluttered. I have bikes, toys for Genevieve, loads of board games (since that's what we like to play), tons of books, video games, pictures, a shovel in case it snows, a vacuum, and various other things, and the lone closet on the first floor is packed to the point of overflowing. During my initial flame of inspiration to declutter, I didn't make much progress on the first floor because there was just nowhere to put anything. Because my basement...

My basement is a nightmare!  When I moved back into my house 4 years ago, everything I brought with me just went into a pile down there. As I needed things, I would bring them up. When I didn't need things, or had loads of boxes from online purchases, they made their way downstairs. I was always afraid to throw things out, partly because I wasn't sure of the impact on the environment or whether the trash truck would even take them, and probably because I'd just forget.

So yesterday, March 2, I worked all day to clear it up. I threw out a lot of stuff. I found my PS3 during the day and noticed that all the games I have are in mint condition, and they were all in their boxes! I went and sold them, trying to sell my PS3 as well, but I was missing a wire. I ordered it so I should be able to sell it Wednesday. (Hopefully the sale nets me more than $10 :)

I ordered Sterilite boxes to store stuff in, as well as a metal rack. I already had a 4-shelf rack down there, which was quickly consumed by bedding and Christmas and Easter decorations, so I got a 5-shelf one the other day. The Sterilite boxes I got were four (for lack of a better term) big ones and six (again) small ones; I'm sure they have proper liter measurements.  The whole process is nearly complete. When I'm done here, I'm going to package up the rest of the loose ends down there, throw away the remainder of the boxes, figure out what I can sell, and then it will be done!  And I will have room for the rest of the stuff on the first floor, to be packed up, sold, or trashed. It's a liberating experience. I feel like I'm reclaiming my house!

Financial Tidying

At the same time... I'm doing what can only be described as financial tidying. In the same way that I'm making room in my basement for stuff upstairs, I'm making room in my monthly expenses for a new purchase that I'll be making. I mentioned this earlier in my Hybrid Car Shopping post. Some of the things I'm doing there:

  1. Switching my phone provider to Google Fi. The initial cost is the phone, which was $800 (I plan on paying it off quickly), but the monthly cost will be under my control, anywhere from $20 up to $80, before taxes I think. My current phone bill is $120. The variable part of Google Fi's pricing is data, at $10 per GB. I typically only use 3 GB a month WITHOUT moderating myself, so if I limit that, my bill will probably be $50-60 cheaper, and it'll have paid for the new phone within a year.
  2. Paying off credit cards. For some reason I never used to count these as monthly payments. That was a mistake! I was making a little more than the minimum payment each month, something like $200 a month across 2 cards. The debt is not outrageous, but if I can get rid of it quickly, I gain back $200 a month right there! Easy!!
  3. Paying off my lawyer. This is zero interest (so far!) and not the highest priority, but at the rate I'm going it'll be paid off in a year. He's likely to do more work for me, though, and that's ok, even welcome, since he provides a valuable service :)
  4. At some point I'm going to learn the Google Cloud stack and turn my websites into functions, so I won't need a virtual Linux server like the one this site runs on. I've already moved all of my private Git repositories to GitHub since they now offer unlimited private repos for free!  FREE!

Those are the things I can do right now. I have no interest in paying extra principle (see what I did there?) on my house, since I was going to try to move but now I don't feel I need to, provided I can get my house into tip-top shape and be happy living here. There are a few things I want to do in that regard.

  1. New furniture. This includes a new couch and new mattress. These will likely not happen until I get my monthly payments down to a minimum, namely paying off the credit cards.
  2. New refrigerator. It's just old and the lights don't work, etc. However, this is very low priority; I might just go in and reorganize it to start feeling better about it.
  3. Fix plumbing. I have a leak in my bathroom piping. This is high priority.
  4. Fix my garage door. This is medium priority.
  5. Nest thermostat. This could also help with my monthly bills.

In terms of the Marie Kondo method and its "sparking joy" concept: it's definitely not only about joy but also function. I'd never throw away a pair of sweatpants because they don't spark joy; I keep them around for the few times I actually feel like lounging in them. It's hard for me to assess the joy caused by material things, since I'm not a material girl, or even a girl for that matter. I do have a lot of stuff, but that's just because I have interests. I have about 50 piano books. Old cameras. Video games and consoles (like an Atari 7800 and a Sega Genesis). Tons of guitar stuff. Computer stuff. My hobbies spark joy. However, the space these things take up while not organized and put away neatly sparked the opposite of joy. And that's what I'm addressing :D

On the financial side of things, credit card debt does not spark joy! :P  Debt in general can kiss my butt. Monthly payments do not spark joy. A new shiny car with sweet tech and high gas mileage though... In order for me to fit that in I had to "make room in the basement". The difference between the financial tidying and the house tidying, is that I am not making room in my house for new stuff. Just making room for breathing, and for joy.

I often think... did watching Marie Kondo light this fire? Or was it the timing of me like, coming to a point in my life where I feel cluttered and overwhelmed with loads of crap, a bit of debt (again, not ridiculous), wanting to introduce something new in my life?  Like a car? Or perhaps start dating?  Or is it all just Marie Kondo and her contagious joy?  I think it was Marie Kondo :)

Thanks for reading! It's a journey.

SCGen Update

With some client Sitecore work coming up, I've had to think about how to get rid of TDS for that specific client. I haven't had to do much Sitecore template work on that project at all, so I've been able to exclude the TDS projects from the solution pretty much from the beginning.
 
However, the TDS code generation within that project created a Model.cs file that is nearly 53,000 lines. 2.4 MB!! It's pretty important that scgen can generate essentially the same code, but without the enormous overhead of TDS. Well, not exactly the same code: much of the code in that Model.cs is repeated, like the full set of "using" statements duplicated for every single type... (ugh).  I can probably get it down to half the size, or even better!
 
The TDS code generator was also generating things that weren't covered by scgen, like field wrapper properties along with field wrapper value getters, and Solr index attributes. The index field attributes use the name of the property, both lowercased and underscore-separated and in C#-style property naming.
 
The project's Model.cs needed a lot of things that just aren't covered by scgen, and I didn't want to add dedicated properties to scgen that only this project needs.
 
So, my solution...
 
Long story short, Go lets you deserialize JSON into a concrete struct much like Newtonsoft.Json does, and it lets a field's type be json.RawMessage. So the FieldType type looks like this now:
type FieldType struct {
	TypeName      string          `json:"typeName"`
	CodeType      string          `json:"codeType"`
	Suffix        string          `json:"suffix"`
	PropertiesRaw json.RawMessage `json:"properties"`
	Properties    FieldTypePropertyMap
}

type FieldTypePropertyMap map[string]string

To define these properties, you can just add a "properties" json property to the field type. Here's an example for the "checkbox" field type:

        { "typeName": "Checkbox", "codeType": "bool", "suffix": "", "properties": {"SpecialProperty": "SpecialPropertyValue"} },

Including this in the text template processing is a bit verbose. It's not so bad if you can always depend on the property being there, but if you need to check for it first, you can use Go's text/template "index" function, like so:

{{ if ne (index $field.Properties "propertyName") "" }}   The property exists, use it here.  {{ end }}

If the map doesn't include that property, it will just return a blank string. And to write it out in the template, simply do this, again with the index function:

{{ index $field.Properties "propertyName" }}

Feel free to browse the code at github.com/jasontconnell/scgen.  There was also an update to the configuration helper that I use for nearly every project I create in Go, located at github.com/jasontconnell/conf.

Hybrid Car Shopping

Yes, I'm in the market for a new car. My current car might last a few more years; it only has 70,000 miles on it.

I've been saying for the past few years that the next new car I buy would be a Tesla. However, there are many factors leading me away from that. First, they're freakin' expensive. Second, I would have to install something at my house to get Level 2 charging, which brings the charge time down from something like days to overnight. Third, I'm not sure right now is the right time.

So instead of a Tesla, I've been looking at 2019 hybrid models, like the Kia Niro, Hyundai Ioniq, and Honda Insight. They all seem like good cars. However, I want a hatchback, since I currently drive one, so the Insight loses a point, and I want 55 mpg. On paper the final choice would be the Ioniq. The choice between the Insight and the Ioniq will come down to test drives of each, but I hope the Ioniq is good enough, because I want that car :)

So, I was doing some math on the choice to go hybrid. The one constant drive in my life is to and from work. The trip is 14 miles round trip. I've measured the MPG for my car, the 2011 Hyundai Elantra Touring. It has a 14 gallon tank. It has been getting 20 mpg for the trip, which is highly reminiscent of city driving. The Ioniq and Insight both advertise 55mpg city.

14 miles per day, with 4 weeks of vacation figured in (and not counting the days I work from home): call it 240 working days a year making the 14-mile round trip, so 3,360 miles. Just for work.

At my current car rate, I will use 168 gallons of fuel per year. Again, this is just counting as if all I ever used my car for was back and forth to work, nothing else. With a gas tank of 14 gallons, this would require a fillup 12 times a year.

At 55 mpg city, I would use 61 gallons of fuel, requiring (on the Ioniq's 11.9 gallon tank) about 5 fillups per year.

Gas prices change so I will just measure in fillups per year. I've owned my car since May or June of 2012, so 6 and a half years. I bought it with 8K miles on it. So I've put on 62,000 miles. I don't know how many times I've filled up, but I would wager that I've only averaged maybe 22-25 miles per gallon. We'll be generous and call it 25.

By that number I've used 2,480 gallons of gas, or filled up 177 times. By the same token, if I got the Ioniq and it kept close to advertised mpg (or, let's be less generous and say it *only* ends up getting 45 mpg), after 62,000 miles I would have used 1,377 gallons of gas and filled up about 115 times. The fillup count is a little misleading here, since at that scale the 2.1 gallon difference (a 15% smaller tank) matters; the real point is that I would use over 1,100 gallons less.

At the current gas price of $2.50, that's a little over $2,500 in savings (about 1,100 gallons) over the 6.5 years I've owned the car.
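
If you want to check the math, here's the whole thing as a tiny Go program, using the same assumptions as above (25 mpg now, a pessimistic 45 mpg in the hybrid, $2.50 gas):

package main

import "fmt"

func main() {
	const (
		miles      = 62000.0 // miles I've put on the Elantra
		currentMPG = 25.0    // generous estimate for the Elantra Touring
		hybridMPG  = 45.0    // pessimistic estimate for the Ioniq
		gasPrice   = 2.50    // dollars per gallon
	)

	currentGallons := miles / currentMPG // 2480
	hybridGallons := miles / hybridMPG   // ~1378
	saved := currentGallons - hybridGallons

	fmt.Printf("gallons saved: %.0f\n", saved)           // ~1102
	fmt.Printf("dollars saved: $%.2f\n", saved*gasPrice) // ~$2755
}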

So that's not a lot of money. However, I'm still going to get it!  I need a new car. I joked with some people that the 2011 Elantra Touring is falling apart around me; in a few years I'll just be driving a frame around :)  The car works, it gets me places, but there's nothing cool about it. Ok, it's got roof racks. I'll miss those. If I really wanted to keep them I would seriously consider the Kia Niro. But the Ioniq has a .24 coefficient of drag. 55 city, 54 highway! That's pretty awesome. It pretty much matches up in tech against the Insight, although the Insight has that sweet electronic parking brake while the Ioniq has just the foot brake. My Elantra even has a hand brake!  Oh well. 

The Mind of a 20+ Year Developer

I'm not sure how long I've been coding. Let me see... I started with GWBasic in my senior year of high school, so approximately Fall of 1996, so 22 years fully, but I didn't know jack then. I don't know jack still, so by that measure, I've not been developing at all :)

But computer science started in college in Spring of 1998, so it's been over 20 years since I actually started to learn some things. Those were exciting times! Anyway...

As a developer of 20 years who was really into learning everything I could, having dealt with many different languages, coding paradigms, frameworks, ideas, projects, protocols, etc., at some point a lot of solutions started to seem obvious to me. For a recent example, I've been working on my Go Sitecore API. Sitecore is a hierarchical system for storing content, and content is a broad term in the Sitecore world. As they say, "Sitecore is built on Sitecore". It stores all of your data structures (templates), content, ways to render the content (layouts and renderings, for all intents and purposes), system data, etc. Everything is a node. A node is an instance of a "Template".  The "Template" node is itself a node. Etc. And it's all tree based.

So, in writing the Sitecore API, you don't necessarily want to deal with the entire tree. Being a tree, the nodes have a "Path" property, like /sitecore/templates/User Defined/My Template. I wrote a simple way to build the tree from the database and then filter it by a set of paths (paths []string). It simply walks the tree, and the result is a map of Guid -> Item (a node), where the nodes returned all reside within the paths specified. You can provide a path like "-/sitecore/system" (beginning with a hyphen) to say that you want to exclude those items. That code is here. Then I found myself needing the opposite.

Give me all nodes in these paths, then give me all nodes NOT in these paths. You could write a set operation, an XOR or something like that. But I needed to do it by path. Knowing I had the path operations like "-/sitecore" (starting with hyphen) to exclude items, I quickly said to myself, "why not use the same paths, and where it starts with -, remove it, and if it doesn't start with -, then prepend it, and use those paths?"  So that's what I did. You can see that code here.
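
To make that concrete, the hyphen-flipping amounts to something like this (a sketch of the idea using the standard strings package, not the literal function from the repo):

// invertPaths flips each filter path: excluded paths (leading "-") become
// included, and included paths become excluded.
func invertPaths(paths []string) []string {
	inverted := make([]string, 0, len(paths))
	for _, p := range paths {
		if strings.HasPrefix(p, "-") {
			inverted = append(inverted, strings.TrimPrefix(p, "-"))
		} else {
			inverted = append(inverted, "-"+p)
		}
	}
	return inverted
}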

Of course, now I'm thinking the XOR-style operation might be a better idea! Get all of the nodes in those paths, then loop through the base tree and add any node whose ID is not in the resulting filtered tree... that might be a little bit better, I think... although since it results in two loops through the entire tree contents, my original idea may actually be the better one.

So you can see how the mind of a 20 year developer works. Also I'm not afraid to say "Oh, yeah, that can be done much better, I'm going to rewrite it."

An Uncanny Memory

Another thing that I've noticed is that I know how pretty much everything I wrote works. For every project, if someone asked me, "Hey, there's a component in this project you worked on 5 years ago, I have to make an update to it, can you tell me how you wrote it?", I could say "Sure! It works like this" and spout off all the things it does, and anything to watch out for. Sometimes I'm amazed at how I've retained that information through everything I've worked on in the past 20 years.

I might benchmark those two methods and see which one is faster. Happy Coding!

SQL Server, A Million Updates, Multithreading and Queues

In the past month or two at work, I've had two projects involving massive updates of data: pulling data from a source, processing it, and, in both cases coincidentally, updating SQL Server. I've learned a lot.

First, SQL Server does not respond well to multiple threads doing thousands of updates each. I did not know this. I've seen the error message, "Transaction (Process ID XX) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction." more times than I'd like to admit. I've done multiple threads doing SQL updates many times before, but I guess never with tens of thousands of rows.

I wrote two apps that process hundreds of thousands of rows of data each. One was written in C#, the other was in Python. I'm not quite as adept in Python but I've learned some tricks.

The approach I took in each language was similar. Both involved creating a shared queue that holds all of the SQL statements that need to run; the SQL statements are just stored procedure calls. One process then goes through the queue, batches the statements into 15-30 statement chunks, and executes each batch.

The two solutions, Python and C#, were slightly different though. In Python, the multiple threads add to the queue, and after all the threads are done processing, the main thread drains it. The C# solution involved a singleton object (per connection string) that holds the queue and contains its own thread, which constantly processes the queue, but only that one thread, so it doesn't overwhelm SQL Server in any way. Here's a little bit of pseudo code. In each language I used the built-in queue from the standard library, although in C# I used the ConcurrentQueue.

C# pseudo code



        multiple threads collecting data
        {
              call service
              add all data to be updated to the service
        }

        service data collection
        {
              get sql for object (orm)
              add to the shared queue
        }


        sql execution thread - run
        {
              while (true)
              {
                    open connection
                    while queue has stuff
                    {
                          create a string builder, add 15 or so commands to the batch, separated by ;
                          build a database command and execute it
                    }
                    close connection
                    sleep for a second
              }
        }

Python pseudo code


        multiple threads collecting data
        {
             download data (in this case, just downloading CSVs)
             process data
             get sql statement
             add sql statement to the shared queue
        }

        main thread
        {
             collect all data (fill the queue) across multiple threads
             process the queue in a single thread, executing batches of 35 in this case
        }

So as you can see, the C# version processes the queue as the data is being collected, while Python waits until the end and then processes the queue. I think the C# approach is better; as I said, I'm a little more adept with C#, so I'm more comfortable doing things like that there. Hopefully this helps someone out there with processing loads of data.

Of course, in Go I would have just used a channel! Just kidding, there'd be the same amount of complexity in Go, but definitely the end result would be a lot better looking!
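
Since I brought it up, here's roughly what that single-writer idea looks like with a Go channel standing in for the shared queue. execBatch and the stored procedure name are placeholders, not code from either project.

package main

import (
	"strings"
	"sync"
)

// execBatch stands in for "open a connection and run one batched command";
// it's a placeholder, not a real database call.
func execBatch(sql string) { /* run the batch against SQL Server */ }

func main() {
	stmts := make(chan string, 1000) // the shared queue
	var producers sync.WaitGroup

	// multiple goroutines collecting data and queueing stored procedure calls
	for i := 0; i < 4; i++ {
		producers.Add(1)
		go func() {
			defer producers.Done()
			// ... process data, then:
			stmts <- "exec UpdateSomething @p1, @p2"
		}()
	}

	// single consumer: batch ~15 statements at a time, so SQL Server only
	// ever sees one writer and the deadlock errors go away
	done := make(chan struct{})
	go func() {
		defer close(done)
		batch := make([]string, 0, 15)
		flush := func() {
			if len(batch) > 0 {
				execBatch(strings.Join(batch, ";"))
				batch = batch[:0]
			}
		}
		for s := range stmts {
			batch = append(batch, s)
			if len(batch) == 15 {
				flush()
			}
		}
		flush()
	}()

	producers.Wait()
	close(stmts)
	<-done
}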

Happy Coding!

CRUDGEON

I thought that was a word when I first wrote it. But I was thinking of other words like curmudgeon or something. Anyway...

CRUDGEON

A set of big projects came along at work that, at a high level, consist of some of the same pieces: get data from a service, store it in a database, and generate HTML based on the data. Actually, it doesn't matter that the output is HTML. Generate formatted output based on the data. That's better.

The services are not consistent in their details. One is a WSDL web service, one is a JSON service, and two are just schemaless XML. This part was pretty annoying. Schemaless XML and JSON need to go away. It's 2018; the dynamic typing experiment is over ;)  (that's sure to ruffle some feathers).

When looking over the data that was coming back, 2 responses returned types that have 130+ fields in them. This would have to be represented in a SQL Table, in stored procedures, in a C# class and interface. Looking over 130+ fields, I immediately thought there's no way I'm typing all of that by hand.

A really lazy person (like me) would probably try to use a text editor with regexp find/replace functionality, copy a list of the fields in and run regexp find/replace to format it in the way that I would need it at that given moment. Like, in a list of parameters to a stored procedure, or as a list of parameters to a constructor, or generating properties on a POCO (plain old C# object). I am definitely lazy, but I'm also too lazy to do that each time.

A championship-caliber lazy person (like me) would probably write CRUDGEON.  I also don't know why I keep writing it in all caps. "crudgeon" is equally acceptable for those whose caps lock and shift keys are just too far away.

So what is it?

Basically, you give it a fake C# list of properties, and it'll generate whatever you need it to. Right now it'll generate:

  1. Database Table
  2. Stored procedures for Get, Update, and Delete
  3. C# objects with appropriate attributes for pulling data from services, like XML attributes, JSON "DataMember" and "DataContract" attributes, and so on.
  4. A convenience "map" script which does the copy from a service object to your POCO, in the case of WSDL objects where you don't want to have the WSDL generated type available to everyone, and hence depended on by anything except what you control (I always do this by the way... never expose WSDL types, they should only be internal. But I digress).

The README.md has a lot of info on using crudgeon. It also has example inputs and outputs within the git repository itself. I wrote it with these specific projects in mind, and the generated C# code has references to things I wrote specifically for them, but if I come across any other project that needs to courier data and store it locally, I will definitely be breaking open the ol' VS Code again. I wrote sqlrun in conjunction with crudgeon because I needed a quick way to run all the SQL files it was generating. I've used it hundreds of times in the week since. After testing, I'd find that I needed a new attribute, or a column needed to allow null, or something, and I'd regenerate all of the code, point sqlrun at the SQL that was generated, and begin importing the code again, all within like 10 seconds.

Maybe you'll find some use for it. I know I definitely will. Like I said, it was written with this set of projects in mind, but with little modification it can maybe be used more broadly. Or maybe with no modification! I'll know later when I find an excuse to use it again :)

Happy Coding!

Grunt Work Principle

One word I've used to describe my work style, but not really my programming style, is "lazy". It's a word that usually describes behavior considered detrimental. Not flattering.

However, in the programming world, it is a very good trait, if employed with other proper attributes. Like, being a bad programmer and lazy isn't good. But being a decent programmer with good work ethic and lazy is actually pretty good!

Lazy definitely doesn't describe my work ethic. Lazy describes how I am when confronted with grunt work. I have been trying to describe it with a principle or some other short definition. Out of pure laziness and lack of creativity in general, I'll call it the "Grunt Work Principle". You can feel free to put my name in front of that. I couldn't be bothered to.

Grunt Work Principle

If the amount of grunt work presented exceeds 1 minute, and the grunt work can be automated, no matter how long the process to automate, it will be automated.

In practice, this takes forms ranging from my last post on Sitecore Doc to something simple like taking 100 lines of short text and compacting them to 100 characters per line. For the compacttext project in particular, each individual occurrence probably wouldn't exceed 1 minute, but add up all the times I've had to do it and it's in the 3-4 minute range :)  That project took me maybe an hour total to create, and it can be used indefinitely.
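
The compacttext idea itself is tiny. A sketch of the core of it (not the actual project code) looks something like this:

import "strings"

// compact re-wraps whitespace-separated words so each output line is at most
// max characters (a single word longer than max still gets its own line).
func compact(text string, max int) []string {
	var lines []string
	var line string
	for _, word := range strings.Fields(text) {
		switch {
		case line == "":
			line = word
		case len(line)+1+len(word) <= max:
			line += " " + word
		default:
			lines = append(lines, line)
			line = word
		}
	}
	if line != "" {
		lines = append(lines, line)
	}
	return lines
}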

There is no upper limit on the amount of time automating can take; the only restriction is whether it can be automated at all, for instance if it requires human interaction or judgments a computer can't easily make alone. Even then I'd probably find a way to automate as much as I can. With the Sitecore Doc project, I could automate getting items and renderings from Sitecore and generating output, but at the time (and I plan to integrate source-d into my workflow) I could not easily figure out a way to map method calls to renderings. So that part I had to do manually, which was a few hours' worth of grunt work. Oh believe me, tracking calls to methods is grunt work when all you are doing is producing documentation!!

Benefits of Automation

Future re-use: The top reason to always automate the task at hand is future re-use. You may initially automate very specifically for the task at hand, but later find you can re-use it with small modifications, or with a complete rewrite, or completely as-is. It's all fine.

Consistency: Automating produces consistent results. In my compacttext example, the output is predictable: if I specify a 100-character line length, the same input will produce the same output 100% of the time. If a human were doing it, there's no guarantee, as eyeballing line length can be skewed by things like screen size, font size, caffeine consumed, etc.

It is usually too soon to optimize, but wtf are you waiting for with automation?!  Get to it!!

Sitecore Doc

Sitecore solutions can become large and unwieldy. Recently I was tasked with the following: find out on which pages each of these service methods (40-50 of them) is called. With how .NET and Sitecore applications (generally all good applications) are written, a service method call lives within a component, but that component could be put on any page!

Luckily these components manifest themselves in Sitecore as "renderings". They can have data sources and parameters. And code, which is what leads us to this mess in the first place ;)

First we need a way to map these renderings to service calls. I came up with a generic "info" data field in a JSON file that defines all of the renderings we're interested in. On a side note, I only include the renderings we're interested in; this current project would yield a 4-5 MB result file, which would be ridiculous, if it included everything. That JSON looks like this:

 

{
    "includeUndefined": false,
    "renderings": [
        {
            "id": "6A8BB729-E186-45E7-A72E-E752FDEC2F48",
            "name": "AccountBalancePurgeSublayout",
            "info": [
                "CustomerInformationService.GetCardStatusV4(authHeader, accountId)",
                "CustomerInformationService.GetPurgeInfo(authHeader, accountID)"
            ]
        }
    ]
}
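
For reference, the Go structs that JSON maps onto would look roughly like this (the names here are mine, not necessarily scdoc's actual types):

type DocConfig struct {
	IncludeUndefined bool        `json:"includeUndefined"`
	Renderings       []Rendering `json:"renderings"`
}

type Rendering struct {
	ID   string   `json:"id"`   // the rendering item's ID in Sitecore
	Name string   `json:"name"`
	Info []string `json:"info"` // the service calls mapped to this rendering
}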

Using my (recently updated to accommodate this request) Go Sitecore API, I was able to take that information, map it against every page's renderings (or standard values renderings), and produce a file listing every page and its (eventual) calls into service methods. The services aren't usually called directly within the page code, and there's heavy caching going on as well. Here's what the output looks like:

 

    Name:     booking-calendar
    ID:       f1837270-6aca-4115-94bc-08d1a4ed43ad
    Path:     /sitecore/content/REDACTED/booking-calendar
    Url:      https://www.REDACTED.com/booking-calendar
    Renderings:
            Default
                CalendarBooking   
                    Path:         /layouts/REDACTED2013/REDACTED/SubLayouts/Booking/Calendar/CalendarBooking.ascx
                    Placeholder:  content 
                    Info:
                                  ReservationService.GetAllRoomTypesV2()
                                  ReservationService.GetCashCalendarV3(GetAuthHeader(),promoCode,startDate,endDate,isHearingAccess,isMobilityAccess, isWeb)
                                  ReservationService.GetCashCalendarWithArrivalV3(GetAuthHeader(), promoCode, roomType, arrivalDt, numNights, isWeb)
            Mobile
                CalendarBookingMobile   
                    Path:         /layouts/REDACTED2013/REDACTEDMobile/SubLayouts/Booking/Calendar/CalendarBookingMobile.ascx
                    Placeholder:  content 
                    Info:
                                  ReservationService.GetAllRoomTypesV2()
                                  ReservationService.GetCashCalendarV3(GetAuthHeader(),promoCode,startDate,endDate,isHearingAccess,isMobilityAccess, isWeb)
                                  ReservationService.GetCashCalendarWithArrivalV3(GetAuthHeader(), promoCode, roomType, arrivalDt, numNights, isWeb)

This was very useful for this specific task, however it's written in a way that will be very useful going forward, to provide insights into our Sitecore implementations and how the content is structured.

This app will see updates (sorry, the code isn't available for now) so that it can show usages across different renderings, unused or broken ones (a rendering that exists in a renderings field but not as an actual item in Sitecore, because it was deleted or not imported), and other stuff as I think of it. The binary is named "scdoc", as I like to keep my names short :)  The Sitecore code generation tool I wrote is simply "scgen".

Check out that Go Sitecore API though if you want to easily query your Sitecore database!  Happy Coding :)

Goals for the Summer

Goals for the summer... I've written a few of these in my life, it'd be nice to not have to do them again.

#1 - Code generator that generates code generators.
#2 - ORM which, based on inputs, will map the appropriate choice of ORM to my current needs. An ORM Mapper.

https://twitter.com/jasontconnell/status/989965266141569025