photo galleries
documenting the process from lightroom to aws to quarto web pages
Lightroom
I’ve accumulated far too many photos over the past couple of decades: my Lightroom library counts over 84k photos, stored on two different external hard drives (plus a NAS and Amazon Glacier as backups). I’ve been relatively organised in creating Collections and sub-collections, which at some point were synced as Flickr albums. But I’d better not get started on Flickr.
Adobe (come to think of it, do I prefer talking about Adobe?) “convinced” me to shell out for their annual LR subscription, which now comes in two versions: classic, and quantum(?). I prefer to stick with the former because, much like a cat trapped in a dark room, I don’t know when the poison will get released.
To take one example, the collection “NZ” has a sub-collection “kaikoura” with 6105 photos (yes, that’s a lot, and most are bad, but who has time to decide which to keep or delete?). That’s clearly too much for an online album, and too much wasted bandwidth if I were to sync it with online storage, so I start by duplicating the album (as a virtual copy) into “kaikoura-sync” and trimming the selection down to a more manageable 42 photos (I aim for fewer than 50). I then opt in to sync this album as a Lightroom CC |ket>, which lets me create an online album automatically:
https://bapt.myportfolio.com/kaikoura-sync
That’s alright, I guess, but I don’t like letting Adobe decide on the layout for me, or having my images tied to their quantum platform and its ridiculous subscription ladder.
So the next logical step is to create my own web gallery, which involves:
- exporting albums from LR
- storing the photos on amazon S3
- generating static galleries with quarto
Exporting albums from LR
I create a new “Publish Service” (it takes about 5 attempts, because you can’t change the settings if you change your mind; nicely done, Adobe).
Note the indexed filenames, which preserve the custom order defined in the LR collection.
LR exports bulky jpgs, so I pass them through ImageOptim, which typically reduces their size by about 70%, to something that seems a good compromise between viewing quality and loading time.
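If I ever want to script that step instead of dropping folders onto ImageOptim, something like the magick package could do a rough equivalent from R; just a sketch of an alternative, not what I actually run, and the quality setting below is a guess:
library(magick)
library(fs)

# hypothetical stand-in for the ImageOptim pass:
# re-encode each exported jpg at a lower quality setting
shrink_album <- function(dir, quality = 80) {
  for (f in dir_ls(dir, glob = "*.jpg")) {
    image_write(image_read(f), f, format = "jpeg", quality = quality)
  }
}

# shrink_album("photos/kaikoura")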
Storing photos on Amazon S3
Full disclaimer: I’m really only writing all this as a way to hopefully remember the gymnastics involved in connecting to S3.
pip install aws-shell
aws-shell
First run, creating autocomplete index...
s3 ls
Unable to locate credentials. You can configure credentials by running "aws configure".
Well, that’s a good start. Running configure from the aws-shell prompt (the equivalent of aws configure) lets us store the credentials in ~/.aws/credentials:
[default]
aws_access_key_id = XXXXXXXXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
So that’s that. Pro-tip: store it somewhere safe; I got it tattooed on my left arm, but under the sleeve for extra safety. Thieves will have to cut my hand off; I hope they appreciate the irony.
With this unexpected access to my stuff, I can now list the bucket,
aws> s3 ls
2021-08-02 01:48:26 my-website
Yeah. Navigating there through the AWS web interface suggests I did upload folders there at some point, via the command line as I recall.
aws s3 cp photos/kaikoura s3://my-website/photos/kaikoura/ --recursive --exclude "*.dng"
(not that I have dng there, but I’d never remember the syntax if I did need to exclude something one day)
aws s3 cp photos/kaikoura s3://my-website/photos/kaikoura/ --recursive --exclude "*.dng"
upload: photos/kaikoura/web-04.jpg to s3://my-website/photos/kaikoura/web-04.jpg
upload: photos/kaikoura/web-01.jpg to s3://my-website/photos/kaikoura/web-01.jpg
...
upload: photos/kaikoura/web-42.jpg to s3://my-website/photos/kaikoura/web-42.jpg
Success! Though, unfortunately, I forgot to run the file optimisation first. Let’s see how I can update files. Update: it sounds like you can’t modify an object in place, only replace it wholesale. Better get it right the first time.
aws s3 rm s3://my-website/photos/kaikoura --recursive
# the redo
aws s3 cp photos/kaikoura s3://my-website/photos/kaikoura/ --recursive --exclude "*.dng"
The 42 files add up to 5.8 MB, which sounds reasonable (?) for a page. Maybe at some point I’ll want to look into those CDNs that serve different versions of each file at various sizes and resolutions.
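Since I’ll be repeating this dance for a few dozen albums, I’ll probably end up wrapping those same aws commands in a small R helper. A rough sketch, assuming the aws CLI is configured as above and the exported albums live under photos/ (the function name is made up):
library(glue)

# hypothetical wrapper around the aws CLI calls shown above:
# wipe any previous upload of an album, then push the current export
upload_album <- function(album, bucket = "my-website", local = "photos") {
  system(glue("aws s3 rm s3://{bucket}/photos/{album} --recursive"))
  system(glue("aws s3 cp {local}/{album} s3://{bucket}/photos/{album}/ --recursive --exclude '*.dng'"))
}

# upload_album("kaikoura")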
Note that accessing the file directly through its S3 URL is prohibited:
https://my-website.s3.ap-southeast-2.amazonaws.com/photos/kaikoura/web-01.jpg
<Error>
<Code>PermanentRedirect</Code>
<Message>
The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.
</Message>
</Error>
(I forget how exactly this was set up, but I hear it’s a good thing)
Access is done through CloudFront, which I guess adds an extra layer of cushioning for safety,
so the file is accessible at
https://xxxxxxxxxxx.cloudfront.net/photos/kaikoura/web-01.jpg
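A quick sanity check from R that the distribution actually serves a photo (just a convenience check with httr, not part of the pipeline):
library(httr)

# should return 200 for the CloudFront URL (the direct S3 URL is refused)
status_code(HEAD("https://xxxxxxxxxxx.cloudfront.net/photos/kaikoura/web-01.jpg"))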
Generating the page with quarto
I now have a bunch of photos available at <cloudfront-url>, which mirrors the local album directory. I can list those local files, wrap them in a gallery template, and prepend the CloudFront URL.
Because my goal is to reproduce a few dozen album pages, I decided to automate the process with a template (in case I later decide I’d like to tweak something for all of them).
Template for photo albums
I define the following template as a string to pass to glue()
tpl <- "
---
format:
  html:
    theme: litera
    css: ../gallery.css
---

::: gallery
::: column-page

````{r, results='asis', echo=FALSE, warning=FALSE, message=FALSE}
library(glue)
library(fs)
library(here)
photos = fs::dir_ls(path = path('<local>', '<album>'), glob = '*.jpg')
baseurl = 'https://xxxxxxxxxxx.cloudfront.net'
for (i in seq_along(photos)){
  cat(glue('![]({{ baseurl }}/photos/<album>/{{ fs::path_file(photos[i]) }}){style=\"column-span: none;\" group=\"<album>-gallery\"}', .open = '{{', .close = '}}'), '\n')
}
````

:::
:::
"
where <album> will be iterated through a list of photo albums (kaikoura in this example), and <local> points to the local directory holding the exported photos.
The photos corresponding to this particular album are listed via the R chunk, and formatted as
![](https://xxxxxxxxxxx.cloudfront.net/photos/kaikoura/web-01.jpg){style="column-span: none;" group="kaikoura-gallery"}
![](https://xxxxxxxxxxx.cloudfront.net/photos/kaikoura/web-02.jpg){style="column-span: none;" group="kaikoura-gallery"}
[...]
![](https://xxxxxxxxxxx.cloudfront.net/photos/kaikoura/web-42.jpg){style="column-span: none;" group="kaikoura-gallery"}
(Note the second use of glue within the chunk (itself part of the template), which requires different delimiters.)
The script _generate_albums.R produces such .qmd documents for the whole list of albums.
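The idea, stripped down to a sketch (the album vector and photo directory below are placeholders, not the real ones; the actual script isn’t shown here):
library(glue)

# sketch of the idea behind _generate_albums.R: fill the <local> and <album>
# placeholders of tpl (defined above) and write one .qmd page per album
albums <- c("kaikoura", "abel-tasman", "wanaka")  # placeholder album list
local  <- "photos"                                # placeholder local directory

for (album in albums) {
  writeLines(glue(tpl, .open = "<", .close = ">"), glue("{album}.qmd"))
}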
Responsive CSS gallery
I used to use DeSandro’s Masonry javascript library to arrange the photos in a responsive “masonry” layout, but the interaction with a javascript lightbox got a little tricky. It seems easier nowadays to just rely on CSS, and here I’m using
.gallery > div {
  columns: 16em;
  gap: 0.5rem;
}

.gallery img {
  display: block;
  width: 100%;
}
to arrange the photos column by column. It’s not entirely ideal, because I’d prefer the layout to be filled row-wise (to follow the order I set for the photos), but that’s a compromise I’m happy to make for the sake of simplicity. Here’s what the kaikoura album looks like (with a few minor tweaks to the above template):
RSS feed for new albums
Since this photo website isn’t organised as a blog, I didn’t find an obvious way to set up an automatic RSS feed. I’m planning to add one album per week this year, so it would be nice if a visitor had a way to keep track of the updates.
I thought about creating a yaml file where I would manually list the new albums one by one, but it appears that the RSS format is quite picky about the published date format, so I resorted to generating the valid XML via an R script.
library(yaml)
library(minixml) # devtools::install_github("coolbutuseless/minixml")
library(anytime)
generate_empty_feed <- function(file = 'index.xml'){
  doc <- xml_elem("rss", version="2.0")
  channel <- doc$add('channel')
  channel$add('title', "Photography")
  channel$add('link', "https://photo.bapt.xyz/")
  channel$add('description', "My latest photo pages")
  channel$add('lastBuildDate', rfc2822( anytime(Sys.time()) ))
  cat(as.character(doc), "\n", file = file)
}

import_feed <- function(file = 'index.xml'){
  rl <- readLines(file)
  minixml::parse_xml_doc(paste(trimws(rl), collapse = ''))
}

new_entry <- function(feed,
                      album = 'kaikoura',
                      description = paste("photo album of", album),
                      title = paste("New album:", album),
                      link = paste0("https://p.bapt.xyz/", album),
                      guid = paste0("p.bapt.xyz/", album),
                      pubDate = rfc2822( anytime(Sys.time()) )){
  channel <- feed$children[[1]]
  lastBuildDate <- xml_elem('lastBuildDate', rfc2822( anytime(Sys.time()) ))
  channel$children[[4]] <- lastBuildDate
  item <- xml_elem('item')
  item$add("title", title)
  item$add('link', link)
  item$add('guid', guid)
  item$add('pubDate', pubDate)
  item$add('description', description)
  encl <- xml_elem("enclosure", url = glue::glue("https://p.bapt.xyz/images/{album}.svg"),
                   length = "12345", type = "image/svg+xml")
  item$append(encl)
  channel$append(item)
  return(feed)
}

update_feed <- function(feed, file = 'index.xml'){
  cat(as.character(feed), "\n", file = file)
}

# generate_empty_feed() # only first time
feed <- import_feed()
new_feed <- new_entry(feed, album = 'kaikoura')
new_feed
update_feed(new_feed)
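The rfc2822() helper isn’t shown above; something along these lines produces the date format RSS expects, assuming an English locale for the day and month abbreviations:
# minimal sketch of rfc2822(): RSS wants RFC-822 style dates,
# e.g. "Thu, 05 Jan 2023 20:12:26 +1300"
rfc2822 <- function(time = Sys.time()) {
  format(time, "%a, %d %b %Y %H:%M:%S %z")
}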
The first pass generates an empty feed,
<rss version="2.0">
  <channel>
    <title>Photography</title>
    <link>https://photo.bapt.xyz/</link>
    <description>My latest photo pages</description>
    <lastBuildDate>Thu, 05 Jan 2023 20:12:26 +1300</lastBuildDate>
  </channel>
</rss>
and at every update, this file is re-imported, the lastBuildDate updated, and a new entry appended to the channel, such as:
<item>
  <title>New album: kaikoura</title>
  <link>https://p.bapt.xyz/kaikoura</link>
  <guid>p.bapt.xyz/kaikoura</guid>
  <pubDate>Wed, 04 Jan 2023 17:03:59 +1300</pubDate>
  <description>photo album of kaikoura</description>
  <enclosure url="https://p.bapt.xyz/images/kaikoura.svg" length="12345" type="image/svg+xml"/>
</item>
The “enclosure” field is one of the ways to add an image to the RSS feed; the byte-length doesn’t need to be accurate, apparently.
Now I only need to update those few albums one by one in LR :)