History Lesson

July 5, 2025

Helping a Vietnam Veteran share his photos and stories from the war

View project

Bruce and two other U.S. Marines pose for the camera in front of a series of large military tents

On March 8th, 2024, I was able to launch a side project for my father-in-law, Bruce Tester, that had been banging around between us for years. We'd get excited, get distracted, get busy, get back to it and the cycle would repeat. This post is a summary of the design and development of the site as well as the history of all our fits and starts.

The Background

Bruce is an accomplished photographer and with my design background, we’ve spent countless hours geeking out over Photoshop, the latest digital cameras, scanners, printers and all manner of photography related equipment. Many times, I’ve happily ended up being his de facto photographer’s assistant lugging around gear and setting up for a golden hour landscape shot or a time lapse night shoot. Bruce is humble about his photography and despite filling his house with gorgeous prints, he doesn’t really promote his photos. My wife and I have always loved his work and would (gently) push him to display them somewhere, but to no avail. Finally, for one holiday gift, I bought him his own domain, set up hosting on my account and volunteered to set up a site for his photos.

First Attempts

For quick setup and ease of use, I installed WordPress for him and threw a couple of his photos in place using a free photography theme. I tweaked the type a bit, created a quick logo of sorts and posted some photos to show him what was possible. And that was where it sat. For years. We both got busy with life and it was always more fun to take photos than to build a photography website.

Getting Serious

Once Bruce retired, he finally had the time to begin digging through his photo archives, retouching and organizing thousands of photos. This is when we came back to the idea of creating a web archive for his photos. The collection that was, in many ways, most important to him was his photos from his time in Vietnam. Since we already had WordPress installed, he began feeding me sample photos and I started setting up test pages.

Of course, like many projects, as we reviewed and discussed the test pages we found the site wasn't quite accomplishing what he wanted. I had even given him some basic WordPress training so he could upload photos and create pages, but it just wasn't working out. He wanted to write more about each photo, write more about his time in the Marines and organize the photos into subsections to better tell his story. In short, the theme wasn't working out. I searched for other themes and tried several, but the more we talked, the more it dawned on me — we needed to build it from scratch. It would certainly be a ton more work, but we could create something unique, personalized and beautiful.

The Design

The decision to finally build the site by hand was in many ways a huge relief. While I've used WordPress for years and years (and still do for this blog), I've grown beyond the framework and its limitations are now irritations. Starting from a blank page was freeing and more in line with the majority of my recent efforts. Hand crafted instead of framework dependent.

Overall, the design inspiration was really about the time period — the 1960s. And not the free love, hippie, psychedelic, day-glo lettering 1960s. The other America. The more conservative 1950s generation with buzz cuts and starched shirts. Looking through the command diaries and cruise book Bruce provided really got me thinking about what “official” document design looked like back then. I started researching other government publications.

The primary goal of the site was, of course, to focus on the photos. The site hierarchy was a basic thumbnail gallery leading to individual photo pages. Nothing crazy there. Navigation is a simple masthead with a drop down menu along with breadcrumb link navigation at the bottom of each interior page. The breadcrumb links expose the structure and allow for jump navigation. Within each photo page there is also a previous/next link navigation so you can alternatively progress through the site in a linear fashion.

Framing the photos and lifting them off the page a bit was another design choice to help separate and define the content, directing attention and focus. The site colors were all muted tones: blacks, whites and warm grays — all in service of the UI and not to interfere with the photos.

The typography was chosen to split between the site UI (menu, navigation, etc.) and the content. For the UI, the Inter font family was perfect. It's got lots of options, it's very legible and modern, but also not showy. Like the UI colors, it's not going to distract from the content. For the text content itself, I loved the look and feel of the typewriter style fonts used in the command diaries. Going with a monospace font like Courier New helped anchor the content in its original time.

The one page where I did have a bit of fun was on the About page. I was able to grab a few of the military stamps (“Confidential”, “Unclassified”) from the command diaries along with a bit of texture to add to the background. I think these little touches help to keep the page from being too clean and sterile.

The Development

The site is super old school in terms of development — semantic HTML with CSS for style. The only Javascript used is for the main drop down menu and that's just for the button event to show and hide the menu. Choosing this direction rather than the latest framework flavor was important as the site is a historical document. I always had to keep in mind that I was building for the future. Keeping it simple and using the proven standards that the web was founded on would help ensure that the site would work well into the future. The entire project is essentially a library archive to help tell part of American history. It was a responsibility I did not take lightly.
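For the curious, the menu toggle amounts to something like this minimal sketch; the ids and the hidden attribute are placeholders for illustration, not the site's actual markup:

<button id="menu-button" aria-expanded="false" aria-controls="menu">Menu</button>
<nav id="menu" hidden>
    <!-- menu links -->
</nav>

<script>
// Show and hide the drop down menu on button click, keeping aria-expanded in sync.
const button = document.querySelector("#menu-button");
const menu = document.querySelector("#menu");
button.addEventListener("click", () => {
    const isOpen = button.getAttribute("aria-expanded") === "true";
    button.setAttribute("aria-expanded", String(!isOpen));
    menu.hidden = isOpen;
});
</script>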

Accessibility and the Audience

Accessibility is always a requirement, but as we worked through initial pages, it became even more important as we really considered the potential audience. Given the historical content (and using Bruce himself as a tester), we realized we needed to add design elements to cater to an older audience. Adding numbers to each photo in a category not only helped us in production, but also gives visitors a reference anchor. The same is true with the text overlay that appears on hover for each thumbnail. The “click to view” instructions help guide visitors to the full size photos. This was a great exercise in recognizing and adjusting our bias — just because we're click happy youngsters doesn't mean folks over the age of 70 would understand the implied navigation. Similarly, adding some “how-to” text to the home and about pages helps to make it obvious.

Education and Production

One of the other parts of the development process was education. Bruce has no idea about the mechanics of good website structure, accessibility or SEO, nor should he. He's the photographer. This led to some back and forth as I taught him about <h1> tags, why they were important and why we needed one written for every page. All 400 plus pages. So the production process was Bruce writing and reviewing test pages while I focused on batch image formatting, resizing, optimization and even naming for better SEO. The same was true on the HTML side: sessions of duplicating files, updating the content and links while capturing and handling any unique content that a specific page might have.

One production and accessibility aspect that I also tackled was writing the alt text for the photos. This let Bruce focus on the photos, titles and the overall story. Now, writing descriptions for 400 photos was no small task and at times it did seem overwhelming. After about twenty, you start to burn out and your eyes blur over. Admittedly, some of the descriptions might sound a bit lazy and in a twist, the WebAIM WAVE tool flags some of them as being too long. I do take a bit of pride in providing good alt text, so it’s a warning I’m willing to set aside especially as this is a photography site.

Other production notes include creating open graph images and the site's favicon. The challenge for the open graph image was to come up with the text to help “package” the entire project. Given Bruce's other landscape photography work, I could envision adding new sections to the site and moving this entire project to a subdirectory. Thinking about that possible future helped me to consider this project a “collection,” which then made the open graph designs come together fairly easily. Designing the favicon, on the other hand, was much more difficult. The site is personal, so it doesn't have a logo per se. With no logo and a fairly subdued color palette, there wasn't any material to rely on. I wanted something dramatic, probably because I suffer from too many browser tabs being open at a time. Initial inspiration came from the Marine Corps itself. Namely, the colors. The red and yellow combination is certainly dramatic and jumps out in a browser tab. The second bit of inspiration was the idea of old camera lens iconography. I worked through a few different, more classical lens shutter icons before settling on an abstract approach.

Performance

Things could always be improved and you can chase down minor improvements for a long, long time. Overall though, Google PageSpeed is happy with scores of nearly 100 across the board for desktop (the intended profile). The results for mobile are also impressive, if a bit lower for the performance metric.

PageSpeed Insights scores for desktop: Performance 99, Accessibility 100, Best Practices 100, SEO 100.

Performance reports from GT Metrix and YellowLabs also confirm the site is doing pretty well.

Likewise, the site HTML and CSS both validate without issues. Interesting to note that the WebAIM WAVE tool shows the thumbnail hover text as failing color contrast even when the color combination is well above the WCAG 2.2 Level AA ratio. Turns out it’s due to the opacity being zero. Once the thumbnail is hovered, the text opacity changes to 100%, so the error is a bit of a false positive.
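For reference, the hover pattern in question looks roughly like this (class names are placeholders, not the site's actual selectors); the text passes contrast once visible, but tools evaluating the resting state see the zero opacity and flag it:

/* Thumbnail overlay text: hidden at rest, revealed on hover or keyboard focus. */
.thumb .overlay {
    opacity: 0;
    transition: opacity 0.2s ease;
    color: #fff;
    background: rgba(0, 0, 0, 0.7);
}

.thumb:hover .overlay,
.thumb:focus-within .overlay {
    opacity: 1;
}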

Conclusion

Overall, I'm really happy with the way the site turned out. It feels very professional and I think I've lived up to the responsibility to tell the story. As the grandson of a history teacher and as a child born during the Vietnam War, I feel passionately about accurately telling stories like this for the future. Heck, I think I bought Bruce the web domain for Christmas back in 2008! It only took us 16 years to build something really great. Hopefully, it's also worthy of Bruce and all the men and women who served in the war.

View project

Structured

January 25, 2025

Updating my library for a better future

Visit the library

A screenshot showing HTML code with inline microdata.

Now, I’ve been around the web a while and I’ve played the SEO game with many different search engines. Consequently, along the way, I became a big fan of structured data. It just clicked in my head as an easy way to provide additional context to our content. A way to enable a richer experience without mucking up the presentation.

Of course, with the big boy on the block, that meant using JSON-LD as that's what they “recommend.” And to be fair, it is easy to add to a page and then to update and maintain — especially on large websites.

Of course, on my own little website, there’s not a whole bunch of data to…well…structure, so the JSON-LD is more of a general description of ownership. Still, I’ve been adding it to all my pages for a long time.

<script type="application/ld+json">
{
	"@context": "http://schema.org",
	"@type": "WebSite",
	"name": "Strength of One // Library",
	"url": "http://strengthofone.com/library/",
	"image": "http://strengthofone.com/soo-icon.png",
	"description": "Find out more about my virtual library."
}
</script>

All fine and good. I was recently plugging along making updates to the library (to add and update links) when I came across an article on microdata for books. Andy Dalton's article for the HTMLHell advent calendar switched on the light bulb and I realized I had the perfect page for playing around with structured data in a more expansive way by incorporating microdata directly in the HTML.

When I first started learning about structured data (long ago), I remembered thinking microdata actually made more sense than JSON-LD because it was more closely aligned with the actual content. The main reason I never started incorporating it into my pages (beyond the Google influence) was that my content didn’t seem appropriate. I wasn’t writing up recipes, movie reviews, promoting events or anything that really fit the value pairs approach. With my library though, adding the microdata directly inline made perfect sense. Each library card is already presenting information that could be connected to a Schema.org value.

I started slowly with a few entries, validating each as I completed it, quickly getting the hang of it and gaining confidence. I think the only edit to the HTML structure I had to make was to wrap the author in a <span> element so that it connected with the itemscope in the parent <p> tag. From there, it became a production chore to copy/paste/find/replace for each book as I watched some football and relaxed. Here's a sample book entry with the microdata included. (And yes, the fact that the code is not indented drives my designer brain crazy, but in this narrow blog column, it's easier to read when the lines don't wrap so much. I'll probably write another post about the woes and wails of code formatting for designers.)

<article class="book-card fiction paperback" itemscope itemtype="https://schema.org/Book">
<div class="book">
<h2 class="title"><cite itemprop="name">The Lord of The Rings</cite></h2>
<h3 class="subtitle">Special Edition</h3>
<p class="author" itemprop="author" itemscope itemtype="https://schema.org/Person"><span itemprop="name">J.R.R. Tolkien</span></p>
</div>
<div class="book-details">
<ul>
<li class="category">F</li>
<li class="category" itemprop="bookFormat" itemscope itemtype="https://schema.org/Hardcover">HC</li>
<li class="link">
<a itemprop="url" href="https://openlibrary.org/books/OL38062258M/Lord_of_the_Rings" title="View at the Open Library">
<svg viewBox="0 0 24 24" aria-hidden="true"><title>View at the Open Library</title><path fill="currentColor" d="M3.9,12C3.9,10.29 5.29,8.9 7,8.9H11V7H7A5,5 0 0,0 2,12A5,5 0 0,0 7,17H11V15.1H7C5.29,15.1 3.9,13.71 3.9,12M8,13H16V11H8V13M17,7H13V8.9H17C18.71,8.9 20.1,10.29 20.1,12C20.1,13.71 18.71,15.1 17,15.1H13V17H17A5,5 0 0,0 22,12A5,5 0 0,0 17,7Z"></path></svg>
<span class="visually-hidden">View at the Open Library</span>
</a>
</li>
</ul>
</div>
</article>

I also took the opportunity to clarify the difference between “authors” and “editors” for several books. I removed the distinction from the content and put it into the microdata instead. This does seem wrong in a certain sense — providing the value for machines and obscuring it from humans, but it does make the design cleaner and more consistent. It’s probably something I’ll have to tackle in a future update though. The same is true with some of the other UI details such as the category labels in each book card. Looking at them with fresh eyes, I should probably include some sort of legend or maybe a tooltip. Along the same lines, I can now probably refactor the CSS a bit to style the content using the microdata attribute selectors instead of my custom classes (eventually).
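As a rough sketch of that eventual refactor (selectors keyed to the sample markup above, not the full stylesheet), the microdata attributes can double as styling hooks:

/* Style the cards and titles off the microdata instead of custom classes. */
[itemtype="https://schema.org/Book"] {
    border: 1px solid #ccc;
    padding: 1rem;
}

[itemtype="https://schema.org/Book"] [itemprop="name"] {
    font-style: italic;
}

[itemprop="author"] [itemprop="name"] {
    font-style: normal;
}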

Using structured data in this way also tracks with the “less Javascript is more” idea I’ve been working towards. Adding a <script> tag to the HTML for JSON-LD still feels like regular ol’ Javascript and as such, overkill (even if it isn’t).

Working through all the books, I then circled back for the comics which had a similar pattern but with a few differences unique to them. Namely involving how to indicate each issue number in the series. One or two more production sessions and I was able to update all those entries as well. Again, like with the author’s name, the only change to the HTML was adding an extra <span> for the issue number. Here’s an excerpt from one of the comic book cards.

<article class="book-card fiction comic" itemscope itemtype="https://schema.org/ComicSeries">
<div class="book">
<h2 class="title"><cite itemprop="name">The Uncanny X-Men</cite></h2>
<h3 class="subtitle" itemprop="hasPart" itemscope itemtype="https://schema.org/ComicIssue"><span itemprop="issueNumber">#170</span></h3>
<p class="author" itemprop="author" itemscope itemtype="https://schema.org/Person"><span itemprop="name">Chris Claremont</span></p>
</div>

<!-- more stuff -->

</article>

One of the weird things I noticed when reading up about using microdata versus JSON-LD was the criticism based solely on the microdata making the HTML “messy.” And by weird, I mean ridiculous. Given the popularity of utility based CSS frameworks (ahem, Tailwind) filling HTML elements with long strings of classes, it's a bit silly to criticize microdata for “messy” HTML. And by silly, I mean stupid.

Opinions aside, I'm really excited to get this all in place and to keep using it as I add more books. Even though search engines seem to be dying of self-inflicted AI wounds, I'm still a firm believer that structured data will always be beneficial. Somehow, adding this microdata makes me feel like the page is more robust and future-proof. Heck, if it helps one person find a book, it's all worth it.

Visit the library

Local Archeology: 2024 Recap

January 11, 2025

A review of the progress and discoveries.

Read last year’s inaugural post or view the updated photo gallery.

A torn piece of paper with a musical score printed on it.

Coming out of last year’s post on the project and my efforts, I was feeling a bit defeated. There’s just so much to clean up that the task seems insurmountable and the effort insignificant. But I still love going out in the woods and I’m not going to ignore a problem I could help to solve. Overall progress was a bit lighter in 2024 for two main reasons.

  1. Recovery: After 2023’s massive effort, we needed to let the forest heal from the scars of removing all that large scale commercial debris. Overall, less disturbance to the forest meant less trash removed. This was particularly true over the summer.
  2. Terraforming: It seemed like the maintenance and upkeep of our own little forest wildflower yard garden took a lot more time than in 2023. I imagine this is somewhat cyclical and 2024 was just one of those years where the long term tasks all needed attention. It also seemed like a really hot summer, which led to exhaustion earlier in the day and fewer hikes into the forest.

The Trash

With the reduced emphasis on tackling the commercial dumping, I was really just picking up small loose items to fill my pair of trusty five gallon buckets.

Month     | Bags (33 gallon) | Buckets (5 gallon)
January   | 4                |
February  | 3                | 2
March     | 1                | 11
April     |                  | 8
May       |                  | 6
June      | 1                | 6
July      |                  | 1
August    |                  | 2
September |                  | 3
October   |                  | 1
November  |                  |
December  | 4                | 13
Total     | 13               | 53

Using the same conservative estimates from 2023, the 13 bags at 50 pounds per bag come out to 650 pounds. For the buckets, 5 gallons of water at 8.33 pounds per gallon would be 41.65 pounds, but a more conservative estimate per bucket of trash is probably around 20 pounds, which puts the 53 buckets at 1,060 pounds. All combined, that's an estimated 1,710 pounds of trash removed in 2024. Nearly one ton (0.855) of trash removed from the forest. Not bad at all.

Given that the effort was not on the large scale commercial dumping grounds, but rather on the small bits of trash and debris, there weren’t many “wow” items pulled out. This also led to fewer photos in the gallery, because…once you’ve seen a photo of a beer can, you don’t need to see them all. One fun item was the cast iron water main cap that I’m now using as a cigar ashtray.

  • Car seats (2): Not child safety seats, but actual seats from cars with metal frames, fabric, foam, etc.
  • Bottles
  • Cans
  • Plastic
  • Styrofoam
  • Broken glass
  • Paper
  • Scrap metal
  • Construction debris
  • Golf balls
  • Toys
  • Balloons
  • Tools

The Design

No significant changes to the design of the gallery this year. I’m still happy with the look and feel, so it was just a matter of updating the typography for the year subheads.

The Development

The code for the gallery did get an overhaul this year, most notably from my recent foray into using srcset on the img tags. Taking what I learned from building my new photo page, I started creating the different necessary image sizes, but decided not to go as crazy. The grid items in this design are much more consistent across larger viewports, so I didn't need to create as many image options for the browser. I settled on three sizes: 220px and 460px for the grid thumbnails, plus 1440px for the modal.

<img srcset="img/02-10-2024-220w.webp 220w, 
    img/02-10-2024-460w.webp 460w" 
    sizes="(max-width: 479px) 460px, 220px" 
    src="img/02-10-2024-220w.webp" 
    data-img="img/02-10-2024.webp" 
    alt="A silver dented car wheel hubcap." class="thumb" loading="lazy">

I also found another responsive image testing tool, but it seems…too opaque? There’s really no indication what it’s testing against or what’s going on. I remain skeptical.

This change also meant a rewrite of the Javascript for the modal popup. Previously, the modal grabbed the image source path and used it. With multiple image sources — and wanting to have a larger image size just for the modal — I switched to pulling the correct image path from a new data attribute. Data attributes are great and I really find them to be helpful in connecting the dots between design and function.
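A minimal sketch of that swap, assuming the modal is a dialog element with an img inside it (the ids here are placeholders, not the gallery's actual markup):

<script>
// Swap the modal image to the full-size file referenced by the thumbnail's data attribute.
const modal = document.querySelector("#modal");
const modalImage = modal.querySelector("img");

document.querySelectorAll("img.thumb").forEach((thumb) => {
    thumb.addEventListener("click", () => {
        modalImage.src = thumb.dataset.img;   // data-img holds the larger, modal-only size
        modalImage.alt = thumb.alt;
        modal.showModal();                    // assumes the modal is a <dialog> element
    });
});
</script>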

This works great, but does introduce a bit of a delay as the image loads — particularly on subsequent images. The first time you click on an image, it loads fine, but close it and click on a second image and you’ll see the first image appear briefly before the second image loads. I’m not sure if it’s due to the lazy loading I added to all the images or just a caching issue or something more granular with the JS where data attributes take longer to load than a direct image path. In any case, it’s something I can live with (for now). Perhaps I can modify it with a CSS transition to make the image swap seamless instead of so jarring.


As an aside:

This is a common scenario for web designer/developer/engineers in the corporate world. “I can't change the functional code, so I'll try to improve the interface with a bit of sleight of hand to make the experience less irritating.”


There were some other small performance upgrades as well.

  • Switched to serving the main typeface locally rather than via a third party.
  • Changed all the image formats from JPG to WebP. (I kind of wish we could start using the JPEG XL format, but there’s not enough browser support.)

All of these updates also meant going back and updating all of last year's content. All new image sizes and code edits for all of the 2024 images. Not difficult, just work. Good to do while watching sports or binging some other trash TV. Just a little visual/auditory ambient distraction as you tackle each little production task.

Looking Ahead

The new year has started with a burst of renewed energy and it looks like I'll be able to clear out a ton (literally) more trash this year, including more of the large scale commercial debris items. Although it's harder to find ground items in the winter, the lack of vegetation allows for more visibility into areas that are completely overgrown in summer. Plus, there are no (or very few) home terraforming chores to handle. I've already scoped out two new areas in need of help and added them to my ongoing list. I'm mostly waiting for the ground to thaw out before starting. Here's to making your little piece of the world a little cleaner, a little brighter and a little healthier for the future.

Please don’t litter.

View the updated photo gallery

Instant

December 26, 2024

Another step in recapturing content: Bringing photos back in-house.

With the ongoing enshittification of the platforms we once trusted, I took another step towards untangling myself from corporate social media. I finally committed to stop posting on Instagram and return to posting my photos on my own site.

View the photo page.

Background

After years of immersing these platforms into our digital lives, it's unsurprisingly tedious to extract yourself from their tangled grasp. I've been dutifully moving certain projects off of Instagram and over to their own pages. The one big theme that is still hanging around is…the more general “photo” theme.

Design

Nothing too fancy in terms of the page design: fit into the site’s theme, basic full width grid for the photos and easy to update. Luckily, I’ve been enamored with the square aspect ratio for a while, so the layout was simple. Letting the photos stand on their own was also the point, so there’s no titles or copy to account for in the design.

Development

Since the design is so simple, I did want to add a bit of complication to the development side. I've been playing around with a few image gallery options, but didn't really want to add the overhead of Javascript. Instead, I decided to use the page as an opportunity to once again tackle the image srcset attribute. Now, I've used srcset in production before with the picture element, but it's always been confusing. In this case, I wasn't interested in art directing the photos across viewport sizes or layouts, but merely trying to provide alternative sizes for better performance. This made adding the srcset attribute to the img element the best option.

As I mentioned, the spec for the attribute never seems to stick in my head. The production part is easy enough: create different sizes of each photo based on what size is needed in the layout for different viewports. For the code itself: add a list of source paths to the images along with media queries in the sizes attribute to give the browser a hint as to which image to use. Plus, provide a default fallback image path. It looks fairly straightforward, but there's a bunch of interacting parts that always confuse me. Here's my initial attempt with two image sizes at 420px wide and 720px wide. (Don't forget the alt text either!)

<img srcset="blue-420w.webp 420w, blue-720w.webp 720w" 
    sizes="(max-width: 768px) calc(100vw - 4rem), calc(100vw - 6rem - 224px)"
    loading="lazy" src="blue-420w.webp" 
    alt="A dark blue sky in the top left with the silhouette of bare trees coming in from the bottom right.">

But…once you try to test it out, things start getting more complicated. Resizing the window with the browser dev tools open appears to load the correct image, but the tooltip shown when you hover over the img element seems to report the incorrect size. The dev tools network tab does a better job, but it still doesn't adjust on resize or provide any insight into how the browser is processing the srcset setup.
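One thing that does help is asking the image directly which candidate won; the currentSrc property reports the source the browser actually resolved. A quick console sketch (the selector is just an example):

<script>
// Log which srcset candidate the browser chose for a given image.
const img = document.querySelector(".photos img");
console.log(img.currentSrc);

// Check again after resizing, since the chosen candidate can change with the viewport.
window.addEventListener("resize", () => console.log(img.currentSrc));
</script>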

It's worth mentioning that the other reason I started with two sizes is…laziness. The performance value of srcset (and the picture element) runs up against the additional production work of creating all the different image sizes. After 14 years on Instagram, the sheer volume of photos to process is daunting. Even discounting some of the other themes and culling many photos altogether, it was a bit too much to think about at the start. Starting small, cherry picking just a few images and tackling just two sizes kept the project moving.

Right. Basic layout and images formatted and set up in the HTML. Now on to that testing conundrum. It quickly became clear that the combo of my two image sizes and the media queries wasn't really helping anything. Turns out the layout didn't really help simplify things either. The three column layout necessitated smaller images at larger viewport widths and the one column layout (below 768px wide) required large images initially — and then smaller images as the viewport shrank. So…two layouts needing three (or more) media queries to load two (or more) image sizes. This is where my head starts to hurt and I start to think, “there should be a tool for this”.

A bit of research and luckily, I was able to find this linter for responsive images to help me debug and build better media queries. Just run the bookmarklet (or Chrome extension) on your page and it will run through the images and conditions to make recommendations. In looking at my page, it recommended two different sets of potential image sizes and a new set of media queries. As I suspected, it meant creating more than just two image sizes, so I relented and braced myself for more production work. I went with the lengthier size list recommended by the linter, which meant abandoning my previous two sizes. Here's the new code:

<img srcset="blue-256w.webp 256w, 
    blue-609w.webp 609w, 
    blue-888w.webp 888w, 
    blue-1240w.webp 1240w, 
    blue-1510w.webp 1510w, 
    blue-1740w.webp 1740w, 
    blue-1940w.webp 1940w" 
    sizes="(min-width: 780px) calc(33.33vw - 32px), 
	(min-width: 380px) calc(100vw - 64px), 
	calc(60vw + 80px)" 
    loading="lazy" src="img/photos/blue-609w.webp" 
    alt="A dark blue sky in the top left with the silhouette of bare trees coming in from the bottom right.">

So now there are seven image sizes to pair with the three media queries. A bunch more production work indeed. This immediately took me back to using the picture element in production, i.e., creating tons of different sizes for each image. I ran the linter again and it was happy, but I'm still not satisfied with the testing scenario. I think there's an opportunity for browsers to really clarify which condition is being met and why a particular image size is being chosen. Maybe I need to start using a browser just for development.

The linter did seem to get confused on a couple of occasions. The first issue occurred as I added more and more images. Using the bookmarklet in Firefox caused the linter to hang and eventually crash. I suspect it’s just trying to handle too many options and runs out of memory. Switching over to Chrome and using the extension worked fine and didn’t crash. The other issue was a bit more esoteric. For a few of the images, the linter gets confused and thinks I’m using the srcset image list for art direction (using images with different content) when I’m not.

Images in srcset attribute must not be different

It seems the image name-609w.webp doesn’t show the same contents as name-1940w.webp does, the determined difference is 5%.

This is an issue I wasn't able to resolve. Recreating the image sizes didn't help. I can only suspect that the linter is choking on the pixel values between the small and large sizes. Perhaps it has something to do with the compression.

From there, it was just a ton of art direction to curate the next row of images, size them all, compress them, swap formats, update the HTML and repeat, repeat, repeat. Perhaps at some point I'll set up a Photoshop action or Automator workflow to batch process the images, but that's another project. I think the hardest part has actually been finding the original files. I end up cross referencing my phone with my export from Instagram and various external hard drives, only to often discover a specific photo is only in one or two of those locations. Sigh.

Testing & Next Steps

Looking at the PageSpeed performance, it seems like Google is happy. Here's the desktop score. The mobile score is almost identical, with only the performance score being a tiny bit lower at 94.

Google PageSpeed score showing 100 points for Performance, Accessibility, Best Practices and SEO.

PageSpeed is still mad at me (naturally) as the volume of photos keeps adding up to a large payload, but whatever, it's my page, the images are set to lazy load and hopefully visitors are taking their time to actually look at the photos. Perhaps I'll limit the number of images on initial page load and add a “show more” button to reveal the rest. Might be a good UX update to play around with in a future update.
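If I do go that route, the sketch is straightforward (class names, ids and the cutoff are placeholders): hide everything past the first couple of rows and reveal it on demand.

<style>
/* Hide everything after the first twelve photos until the button is pressed. */
.photos.collapsed img:nth-child(n + 13) {
    display: none;
}
</style>

<button id="show-more">Show more photos</button>

<script>
const grid = document.querySelector(".photos");
const showMore = document.querySelector("#show-more");
showMore.addEventListener("click", () => {
    grid.classList.remove("collapsed");
    showMore.hidden = true;
});
</script>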

I'm also not sure if PageSpeed is accurately evaluating the responsive image sizes. It's still saying I need to size the images properly. Admittedly, I will need to go back and work on the image compression again to help reduce the overall page weight. I suspect I may need to add another image size to the srcset list and adjust the media queries to handle it. I'm just not convinced it's working at its best yet. I'll also do more extensive performance testing with other tools to get comparison metrics.

Looking ahead, there are still many more photos to process, so like the virtual library, I'll add this page to my monthly backlog to update as I find time. I'm sure I'll also need to adjust the layout for photos with different aspect ratios as I'm hoping to dive back into actual film photography soon. For now, this page is just one small step toward reclaiming my content, so keep checking back to see what I post.

View the photo page.

Pattern Pattern

May 11, 2024

Drawing (and redrawing) the patterns from some old favorite clothes.

View project page

Two repeating patterns, side by side, both drawn from old textiles

I was digging around the dusty corners of an old hard drive when I came across a pattern I had drawn way back when. The pattern on the left was actually from one of my favorite 1950s style jackets. I don't know where the jacket went, but I do still love that pattern. The one on the right is from a shirt I still own. More on that pattern later.

When these were originally drawn, I did them in Adobe Illustrator and saved them as Illustrator swatches which is kind of telling as to how long ago it was. Repeating patterns as swatches were new and exciting and Illustrator was the main tool of choice in my work. There wasn’t a whole lot of web design work back then and print ready vectors were mandatory.

Like a lot of my recent archive discoveries, once again, the project was how to update these files for the web.

The Design

Since the patterns were originally swatches, I used that idea as the main design feature. I’ve been creating a few varieties of UI cards for work, so I had a head start on what I needed. Card interfaces are all over the web, but I definitely wanted these to be closer to Pantone or paint chips. There’s a couple of different card options to show the pattern specs, the full repeating pattern and then the individual colors.

For the typography, I wanted something a bit retro to reflect the 1950s period, but without looking too stylistic. I also wanted to branch out from the usual sources and find some new fonts. Being able to self-host the files and if possible, use a variable font were also on my requirement list. Luckily, I found everything over at Fontshare. The main heading is set in New Title and the subheads are set in Clash Grotesk. The really narrow letterforms of New Title definitely had that retro feel I was looking for. I’m also still completely enamored with big chunky sans serif faces, so Clash Grotesk definitely caught my eye. All the rest of the text is set to a general system sans serif for speed and ease (or laziness while coding if you like).

For the colors, I’m still stuck on using the Flexoki palette as I love the warmer print-like tones. Once again, I didn’t really use the full palette, but that’s okay. I also didn’t go overboard and create multiple color themes. Just dark theme for this page, thank you very much. Now, the colors used in the patterns themselves are not from Flexoki and were eyeballed from the clothes themselves as part of the drawing process. It’s also worth noting that the color names used in the patterns are totally made up marketing copy. They’re not accurate in any way. I just wanted some fancy titles for them. Use the actual color values if you want to replicate them.

The page design is also the same basic template I've been using for these little projects: header, main, footer. Nothing crazy, as it lets me quickly get to the fun stuff.

The Patterns

The quickest way to get the patterns ready for the web was to export them out of Illustrator as SVG files. Add in a bit of compression and optimization and the SVGs were ready. I'm using the plural “patterns” here, but I should clarify. The whole project idea started when I found the first jacket pattern, but as I started working, I found I had previously drawn another textile pattern back in 2012. This was the shirt pattern. Turns out my love of patterns has taken many forms. Even though I had blogged about the shirt pattern, I had only provided it as an Illustrator swatch, so including it in this new page seemed like an obvious next step. Creating SVGs for the web is pretty standard fare these days and while it's one of my favorite formats, I did want to push things a bit further. Hence, some code exploration.

The Code

General page design is still based on a responsive CSS grid with three columns and a few column/row spans for the larger cards. Nothing fancy. The real challenges came when I started looking closer at the jacket pattern. It's essentially a pixel pattern — which made me think of a grid pattern — which made me think of CSS grid — which made me do something silly. I created the entire pattern as a CSS grid with each cell representing a pixel of the pattern. Of course, the pattern is 17 x 17 for a total of 289 squares.

(Why the original pattern was 17 x 17 is beyond me, but it made the math more painful than necessary. I actually typed up a cheat sheet with the numbers for easy reference.)

Yep, that’s right. I created a grid with 289 empty <div> elements. Genius, I know. I then set up background colors for the red and black squares with a massive list of :nth-child() statements. Being responsive, it does get squashed when the viewport is resized, so beyond the insanity and performance issues with all those DOM nodes, it’s not an ideal method. The SVG is still better.
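For the curious, the approach boils down to something like this trimmed sketch (placeholder colors and only a few of the 289 cells shown, not the actual selector list):

<style>
/* 17 x 17 grid of "pixels"; each of the 289 empty divs is one square of the pattern. */
.pixel-pattern {
    display: grid;
    grid-template-columns: repeat(17, 1fr);
    max-width: 340px;
    background-color: #f5f1e4;   /* base color shows through the unpainted cells */
}

.pixel-pattern div {
    aspect-ratio: 1;             /* each cell stays square as the grid resizes */
}

/* Paint individual squares by position; the real list goes on for a very long time. */
.pixel-pattern div:nth-child(1),
.pixel-pattern div:nth-child(19),
.pixel-pattern div:nth-child(37) {
    background-color: #a02c2c;   /* red squares */
}

.pixel-pattern div:nth-child(2),
.pixel-pattern div:nth-child(20) {
    background-color: #111;      /* black squares */
}
</style>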

Just because it's possible, doesn't mean it's a good idea. But knowing that didn't stop me from trying another crazy idea. I've been playing around with creating background images with multiple linear gradients. Knowing that we can create hard lines between colors with stops in the gradient, I set out to draw the 17 x 17 jacket pattern as a gradient. This method, like the massive <div> list, was an effort in persistence. Getting the pixels lined up was a complete pain, not to mention all the sizes for the 17 gradient rows, but I did learn a bit more about these complex background images. I even went a step further and created a version of the gradient using percentages instead of pixels. This certainly helped and brought it closer to the CSS grid version in terms of flexibility, but also the same limitations. In the end, I didn't even include an example of the gradient on the page, although I did keep it in the CSS file for reference later.
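For reference, each row of the pattern becomes one hard-stop linear gradient layered into background-image; here is a simplified sketch with placeholder colors and just two of the seventeen rows:

<style>
/* Two-position color stops create hard edges, so each gradient draws one row of "pixels". */
.gradient-pattern {
    width: 170px;
    height: 170px;
    background-color: #f5f1e4;
    background-repeat: no-repeat;
    background-size: 170px 10px;            /* each layer paints one 10px-tall row */
    background-position: 0 0, 0 10px;       /* stack the rows down the square */
    background-image:
        linear-gradient(to right, #a02c2c 0 10px, #f5f1e4 10px 20px, #111 20px 30px, #f5f1e4 30px),
        linear-gradient(to right, #f5f1e4 0 10px, #a02c2c 10px 30px, #f5f1e4 30px);
}
</style>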

Learning all those lessons, coupled with the more intricate pattern of the shirt, I didn’t even attempt either a grid or gradient solution for the second pattern. It might be possible, but after all that experimentation, I didn’t have the patience for more.

Overall, I'm happy with the lookbook design and chip cards. Getting these old projects some new life in a new format is always fun too. Head on over to check it out and download the pattern files.

View project page

Civil Defense

July 1, 2023

Redrawing the original set of civil defense badges from the United States Civil Defense Corps created in the 1940s.

View the project page.

As has become apparent over the last few years (and even decades and centuries before), we are each responsible for our own safety as well as the safety of our communities. It seems this is a lesson we forget and relearn in a cycle of storms, accidents and tragedies. This concept of mutual aid is once again in vogue and becoming more prominent as we begin to see epic environmental changes occurring in our own neighborhoods. When “once in 100 year” events start happening every year, we’ll need to pool our resources and efforts to survive the floods, droughts, storms, fires and freezes.

Luckily, we’ve done this before. During WWI, the U.S. government started what would later become known as the Civilian Defense Corps.

A set of three civil defense logos from the illustration project.

Reading up on the history, I was excited to see that each group had their own badge and more importantly — they were awesome. Really great, simple yet functional pieces of graphic design united in theme and purpose. Unfortunately, as I searched further, I couldn't find much about the logos nor could I find the badges themselves. So I set out to redraw them based on an original document from the period. And that inspiration provided another opportunity to work on a little web design and development to showcase the badges.

The illustrations

Redrawing the badges looked difficult at first, but creating a base template for the logo with the appropriate shapes and colors streamlined the process. Are the final results 100% identical to the originals? No, but they are very close and faithful. I didn’t make any creative or editorial decisions (suppressing all my art director experience — I really wanted to realign each). Each was then exported to SVG and optimized for the web.

Design and development

Now that I’ve got a few of these one off pages created, the ramp up process is much quicker. It’s not quite a template, but it is a starter pack of sorts. The Rorsch project was a great starting point given the retro vintage styles used. But…I didn’t want that much retro. I didn’t need the super distressed paper, but did still want the look and feel of that old manual. I pulled a sample of the paper from the manual to set as a background-image and then applied a background-blend-mode of multiply against the base background color. I did the same for one of the standard grit textures I have on file, but with a different blend mode. Being able to set multiple background images via CSS is a real game changer in terms of design possibilities. I’ve used the same technique, but with gradients, in other projects to draw complex backgrounds instead of using a raster file.
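As a rough sketch of the paper-and-grit layering described above (file names, the second blend mode and the colors are stand-ins for the actual assets and values):

<style>
/* Layer the scanned paper and a grit texture over the base color, each with its own blend mode. */
body {
    background-color: #e9e4d4;                      /* base paper tone */
    background-image: url("paper-sample.jpg"),      /* sample pulled from the manual */
                      url("grit-texture.png");      /* standard grit texture */
    background-blend-mode: multiply, overlay;       /* one mode per layer */
    background-size: cover, 600px 600px;
    background-repeat: no-repeat, repeat;
}
</style>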

The big change for this project was to try out using an SVG filter to add a layer of noise over the entire page. I tried a couple of different ways to get it to work properly and finally ended up putting the SVG filter into the CSS as a background image and then setting the mix-blend-mode to hard-light. I dropped the CSS class onto the html element so that it covered the entire page. Along with the fractal noise distortion, the hard light mode did change the colors a bit, but it works, as it de-saturates them slightly and enhances the retro style.
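Roughly, the noise is an SVG file built around a feTurbulence filter, pulled in as a CSS background. This is my best reconstruction of that setup (file name, filter values and the class are placeholders), sketched here with a pseudo-element on the html element so the grain sits over the whole page:

<!-- noise.svg: fractal noise fills the whole tile -->
<svg xmlns="http://www.w3.org/2000/svg" width="300" height="300">
    <filter id="noise">
        <feTurbulence type="fractalNoise" baseFrequency="0.65" numOctaves="3" stitchTiles="stitch"/>
    </filter>
    <rect width="100%" height="100%" filter="url(#noise)"/>
</svg>

/* Applied as <html class="noise">: the overlay tiles the noise and blends it into the page below. */
.noise::after {
    content: "";
    position: fixed;
    inset: 0;
    pointer-events: none;
    background-image: url("noise.svg");
    mix-blend-mode: hard-light;
    opacity: 0.3;
}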

For the type, I wanted to match the styles in the PDF source, but…I couldn’t go as far as to use that cursive subhead font. It was just too much. The sans serif headers are Alternate Gothic to get that compressed letter width. The serif is FF Seria and despite only being used once for the intro text, I still wanted to try to match the style of the manual. Both are served via Adobe fonts which isn’t ideal from a performance standpoint, but it is convenient. I do wish Adobe would come up with a self-hosting option for customers.

All the SVG files are loaded via a standard img tag inside a figure tag, which allows adding a figcaption to drop in the badge name.
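Each badge ends up in markup along these lines (the file and badge names here are just examples):

<figure>
    <img src="badges/fire-watcher.svg" alt="Civil defense badge for the Fire Watcher role." width="200" height="200">
    <figcaption>Fire Watcher</figcaption>
</figure>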

Packaging up the set of SVG files for a download was the final step. Hopefully, we don’t need them in the future, but it’s always a good idea to be prepared!

View the project page

Xtra

March 29, 2020

Photo of an ITT Xtra computer circa 1984

I have been using computers for over 35 years. This is quite a statement and one that makes me appreciate all the bad ergonomic decisions I’ve made along the way. In fact, it was a diagnosis of tendinitis in my elbow and subsequent introductory questions from a doctor that made me even realize that my relationship with computers had been going on so long.

“Do you use a computer for work?” Yes.

“How long have you been using a computer?” Uhhh, years? Decades? A long, long time.

And I didn’t really have an exact answer. Which got me thinking, because it seems like something one should know about oneself. I’m in the first generation when it comes to personal computers (which may lead to another longer post topic).

The ITT Xtra is the first one I ever really explored. My parents bought it one Christmas and I can still remember first seeing it set up on the kitchen table. No box, no wrapping paper, just sitting there like an alien. Now, to be fair, I had been using computers for some years before this — they were starting to show up at friends' houses and my middle school had purchased a bunch of TRS-80s to stick in a science class.

But there’s a difference between “using” and “exploring”. It wasn’t until this showed up in my house that I was able to spend unlimited hours learning what it could (and couldn’t) do and by extension, what I could (and couldn’t) do with it. My siblings were too young to care much about it and my parents were too busy to learn it, so I became the default user for the house.

“Give ITT a round of applause for including clear, profusely illustrated documentation with the Xtra. This little extra touch is worth it’s weight in gold.”

Creative Computing, 1985

ITT Xtra computer with manuals and extra hard drive

Not an actual photo of my Xtra, but those manuals were fantastic!

It was that “profusely illustrated documentation,” as seen in the photo above, that gave me any chance of understanding how the computer worked. Beyond using the word processing program to write all my school papers, it was here that I began to make the early connection between code and art. The idea that you could program the computer to make art. This was an astonishingly profound revelation — that graphics on screen were directly tied to code — basically text. It wasn’t long, and it was probably the first thing I wanted to do with the computer, before I was writing screen graphics in BASIC.

Seeing the light

One of the coolest features was, of course, the amber monochrome monitor. While I may have had some initial trepidation because I had never heard of the ITT brand before, I was immediately won over when I learned that amber monochrome monitors were easier on the eyes than the traditional green monochrome monitors that everyone else had. Remember, I was a teenager with all the misguided fears, self-doubt and imagined peer pressure that accompanies those years, so having something — anything — uniquely cool was a big deal. Even if the claims about amber monitors weren’t necessarily true.

“An amber screen was claimed to give improved ergonomics, specifically by reducing eye strain; this claim appears to have little scientific basis.[3] However, the color amber is a softer light, and would be less disruptive to a user’s circadian rhythm.”

Wikipedia

So where does all this nostalgia lead us?

It’s led me to create a new page on the site using the style of my old original amber monochrome monitor. It’s a flashback to what it was like for me to work on that first ITT Xtra computer — amber, bitmap fonts on a black background. To that end, I’ve been toying with the idea of including a client archive page on the site. These two ideas were perfect for each other — a long list of text in a table with a retro design style.

The design itself is, by its very nature, basic. I started by finding the perfect shade of amber, not necessarily in terms of accuracy, but more on an emotional basis dredged up from some sort of color memory library. Next up was choosing the perfect bitmap font and I do love a good bitmap font. In this case, historical accuracy won over nostalgia as I was able to find the exact font files used on the ITT Xtra. From there, it was more of a matter of web development to bridge the gap between the old and new.

Now of course, it's not an exact replica of the original, mostly due to the underlying technologies involved. The two big differences are the monitor construction and the text rendering engines in modern computers. Those old monochrome monitors made for much crisper text, so the way your laptop screen is built makes the text a little blurry (on the plus side, it has more than one color). The second difference is how your computer renders the text itself — usually via anti-aliasing or subpixel rendering. While I can't overcome modern displays, I have included some CSS to help recreate that old text rendering.
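For reference, the smoothing-related properties look something like this (the class and the amber value are placeholders); they're non-standard and only honored by some browsers on some platforms, so it's a best-effort nudge rather than a guarantee:

/* Ask the browser to skip font smoothing where it will listen. */
.amber-terminal {
    color: #ffb000;                      /* an approximation of amber phosphor */
    background-color: #000;
    -webkit-font-smoothing: none;        /* WebKit/Blink on macOS */
    -moz-osx-font-smoothing: grayscale;  /* Firefox on macOS; there is no "none" value */
    font-smooth: never;                  /* legacy draft property, largely unsupported */
}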

Along the same lines, I’ve also chosen to use another old technique for the page — loading the client list data via XML and a deprecated Javascript library from Adobe called Spry. Now, admittedly, this has nothing to do with my old ITT Xtra, but it was a fun bit of nostalgia to use again. Old school web designers will fondly remember (or not) that thirteen years ago, loading data into a web page (without a database!) was super cool. At some point, I may swap this out for actual on page data which would be better for performance, accessibility and longevity.

In the meantime though, and without further ado, venture back in time to experience the dawn of personal computing.