Chris Padilla/Blog
My passion project! Posts spanning music, art, software, books, and more. Equal parts journal, sketchbook, mixtape, dev diary, and commonplace book.
- Set up GA4 at analytics.google.com
- Take your GA4 ID over to Tag Manager and create a new GA4 Config Tag.
- Use that config tag in your new custom events.
- Provide the Request URL for your API
- Create a New Shortcut "Suggestion Box"
- If loading data for select menus, provide an API URL for that as well.
- Instantiate your App with Slack Bolt
- Write methods responding to your shortcut callback ID
- Handle submissions.
- Redux stores both Application State and Fetched Data
- Redux Thunks are used to asynchronously fetch data from our Sanity API
- We hope nothing goes wrong in between!
- A Redux action for storing the data
- A query method that wraps around our Sanity GROQ request
- A way of handling errors and missing data
- An easy way to call multiple queries at once
- Hosting your fonts
- Converting font to modern .woff2 format if not already
- Caching fonts in a CDN
- In the future 🪐: using F-mods, a method for matching the fallback font dimensions with the designed font
Debouncing in React (& JS Functions as Objects)
Debouncing takes a bit of extra consideration in React. I had a few twists and turns working with debounced functions this week, so let's unpack how to handle them properly!
Debouncing Functions in Vanilla JS
Lodash has a handy debounce method. Though, we could just as easily write our own:
const debounce = (fn, timeout) => {
  let timer;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), timeout);
  };
};
In essence, we want to call a function only after a given cooldown period determined by timeout.
Lodash comes with some nice methods for canceling and flushing your calls. It also handles edge cases very nicely, so I'd recommend its method over writing your own.
const wave = () => console.log('👋');
const waveButChill = debounce(wave, 1000);
window.addEventListener('click', waveButChill);
// CLICK 50 TIMES IN ONE SECOND
👋
With the above code, if I TURBO CLICKED 50 times in one second, only one click event would fire, after the 1-second cooldown period.
React
Let's set the stage. Say we have an input with internal state and we want to send an API call after we stop typing. Here's what we'll start with:
import React, { useState, useEffect } from 'react';
import debounce from 'lodash.debounce';
const Input = () => {
const [value, setValue] = useState('');
useEffect(() => {
expensiveDataQuery(value);
}, [value]);
const expensiveDataQuery = () => {
// get data
};
const handleChange = (e) => {
setValue(e.currentTarget.value);
};
return (
<input value={value} onChange={handleChange}/>
);
};
export default Input;
Instead of fetching on submit, we're set to listen to each keystroke and send a new query each time. Even with a quick API call, that's not very efficient!
Naive Approach
The naive approach would be to create our debounce as we did above, within the component, like so:
const Input = () => {
const [value, setValue] = useState('');
useEffect(() => {
fetchButChill(value);
}, [value]);
const fetchButChill = debounce(expensiveDataQuery, 1000);
. . .
}
What you'll notice, though, is that a query is still sent for each keystroke.
The reason for this is that a new function is created on each component re-render. So our timer is never cleared out; instead, a fresh debounced function (with its own timer) is created with each state update.
useCallback
You have a couple of options to mitigate this: useCallback, useRef, and useMemo. All of these are ways of keeping a reference between component re-renders.
I'm partial to useMemo, though the React docs state that useCallback is essentially the same as writing useMemo(() => fn, deps), so we'll go for the slightly cleaner approach!
Let's swap out our fetchButChill with useCallback:
const Input = () => {
const [value, setValue] = useState('');
useEffect(() => {
fetchButChill(value);
}, [value]);
const fetchButChill = useCallback(debounce(expensiveDataQuery, 1000), []);
. . .
};
Just like useMemo, we're passing in an empty array to useCallback to let it know that this should only memoize on component mount.
Clearing after Unmount
An important edge case to consider is what happens if our debounce interval continues after the component has unmounted. To keep our app clean, we'll want a way to cancel the call!
This is why lodash is handy here. Our debounced function comes with methods attached to the function!
WHAAAAAAT
A fun fact about JavaScript is that functions are objects under the hood, so you can store methods on functions. That's exactly what Lodash has done, and it's why we can do this:
fetchButChill(value);
fetchButChill.cancel();
fetchButChill.cancel() will do just that: it cancels any pending debounced call before it fires.
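To make that concrete, here's a minimal sketch of how a cancelable debounce could be written by hand, leaning on the functions-are-objects trick. This is a simplification; Lodash's real implementation handles many more edge cases.

```javascript
const debounce = (fn, timeout) => {
  let timer;
  const debounced = (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), timeout);
  };
  // Functions are objects under the hood, so we can attach a cancel method directly:
  debounced.cancel = () => clearTimeout(timer);
  return debounced;
};
```

Calling debounced.cancel() clears the pending timer, so the wrapped function never fires.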
Let's finish this up by adding this within a useEffect cleanup function!
const Input = () => {
const [value, setValue] = useState('');
useEffect(() => {
fetchButChill(value);
return () => fetchButChill.cancel();
}, [value]);
const fetchButChill = useCallback(debounce(expensiveDataQuery, 1000), []);
. . .
};
Migrating Tag Manager to Google Analytics 4
Code Set Up
If you're using Google Tag Manager, you're already set up in the code to funnel data to GA4. Alternatively, you can walk through the GA4 Setup Assistant and get a Google Site Tag. It may look something like this:
<script async src="https://www.googletagmanager.com/gtag/js?id=G-24HREK6MCT"></script>
<script>
window.dataLayer = window.dataLayer || [];
...
gtag('config', 'UA-Something');
</script>
Two things are happening: we're loading the Google Tag Manager script, and we're creating a dataLayer to access any analytics information.
The dataLayer is good to note because we actually have access to it at any time in our own code. We could push custom analytics events simply by adding an event object to the dataLayer array, such as window.dataLayer.push({ event: 'generate_lead' }).
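As a sketch of that push, here the dataLayer is simulated as a plain array (in the browser, the Google snippet creates window.dataLayer for you); the event name and budget parameter are hypothetical examples.

```javascript
// In the browser this would be window.dataLayer; we simulate it as a plain array.
const dataLayer = [];

// GTM expects an object with an `event` key; extra keys become event parameters.
dataLayer.push({ event: 'generate_lead', budget: '10k-50k' });
```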
Tag Manager
If you're already using Tag Manager, you'll want to 1) add a new config tag for GA4, and 2) update any custom events, converting them to GA4-configured events.
It's advised to keep both GA4 and UA tags running simultaneously for at least a year to give yourself enough time for a smooth migration. Fortunately for us, it's easy to copy custom event tags and move them to a separate folder within Tag Manager.
Custom Event Considerations
Dimensions & Metrics
GA4 has two means of measuring custom events: as Dimensions or as Metrics. The difference is essentially that a dimension is a string value, while a metric is numeric.
More is available in Google's Docs.
Variables in Custom Events
Just as you had a way of piping variables into Category, Action, Label, and Value fields in UA, you can add them to your custom events in GA4.
GA4 has a bit more flexibility by allowing you to set event parameters. You can have an array of parameters with a name-value pair. So on form submit, you could have a "budget" name and a "{{budget}}" value on an event. As we alluded to above, you can provide this by manually pushing an event through your own site's JavaScript.
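As a sketch of sending such an event through the gtag helper, here gtag is stubbed out the way the Google snippet defines it (it just pushes its arguments onto the dataLayer); the event name and budget value are hypothetical.

```javascript
// Stand-in for the gtag helper from the Google Site Tag snippet:
const dataLayer = [];
function gtag() { dataLayer.push(arguments); }

// Send a custom event with a name-value parameter pair:
gtag('event', 'form_submit', { budget: '10k-50k' });
```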
Resources
Analytics Mania has a couple of very thorough articles on migrating to GA4 and testing your custom events in Tag Manager.
Sustaining Creativity
I've been thinking about this a lot. I went from making music in a clearly defined community to a much more amorphous one. When walking a more individualist road after being solely communally based for so long, what's the guiding purpose?
So the question on my mind has really been this: what's the motive behind continuing to work in a creative discipline?
Nothing here is really a prescription. It's mostly me figuring it out as I go. I write a lot of "You"s in this, but really I mean "me, Chris Padilla." If any of this is helpful to you, dear reader, by all means take what works! If you have perspectives on this, drop me a line.
So here we go! Three different categories and motives for making stuff:
Personal Creativity
I like making stuff! Just doing it lights me up. The most fun is when it's a blank canvas and I'm just following my own interest. It's just for me because I'm only taking in what sounds resonate with me, what themes come to mind, and what tools I have to make a thing.
I still share because it's fun to do so! It contributes to the pride of having made something that didn't exist before. A shared memento from the engagement with the spirit of creativity. But, any benefit other people get from it is merely a side effect of the process. It's not the purpose.
An interesting nuance that is starting to settle in as I do this more and more — there is no arrival point here. Creativity is an infinite game with no winners and losers, just by playing you are getting the reward and benefits then and there. This alone is a really juicy benefit to staying creative. But maybe it's not quite enough —
Gifts
Creativity for other people. Coming from a considerate place, a genuine interest in serving the person on the other side of it. Often this feels like a little quest or challenge, because I'm tasked to use the tools and skills I have to help, entertain, or bring beauty to the audience on the other end.
I'm pretty lucky in that I've pretty much always done creative work for others that has also led to getting paid for it. Even my current work in software engineering I consider gifts. Money is part of it, but the empathetic nature of building for a specific group of people makes it feel like a gift.
$$$
Sometimes, ya gotta do what ya gotta do. In some ways, this is what separates professionals from amateurs. Teaching the student that's a bit of extra work, learning a new technology because it's popular in the market, or drawing commissions.
(Again, on a motivation level, I don't have much in my life that falls into this category. I'm very, VERY lucky to be working in a field that is interesting, and I have a pretty direct feeling of that work being of service — that work being a gift. BUT I've been in positions before where some of my work was more for those dollars.)
Actually, Game Director Masahiro Sakurai of Nintendo fame talks about this. A professional does what's tasked in front of them, even if it's not what you'd initially find interesting or fun. Even video game dev has its chores!
This type of work is not inherently sell-out-y. You can still find the joy in the work, and you can still find the purpose behind it. Shifting to a gift mindset here helps. Be wary of doing anything purely for this chunk of the Venn diagram with no overlap.
A classic musician's rule of thumb for taking on a gig: "It has to have at least two of these three things: 1. Pay well 2. Have great music 3. Work with great people."
The Gist: Watch your mindset.
There's a balance between gift giving and creating just for you, I've been finding.
Things we make for our own pure expression and curiosity do not need to be weighed down by the expectation of other people loving them or of them selling wildly well. The gift is in following your own creative curiosity. And that's great!
If you're ONLY making things for yourself, and you're not finding ways to serve other people, then you'll be isolated and not fully fulfilled by what you're doing. Finding ways to give creatively is the natural balance for that.
A side note: Go for things that involve a few people, IRL. Nothing quite beats joining someone's group to make music in person, teaching someone how to do what you do, or making a physical gift for someone special!
Creating a Newsletter Form in React
Twitter is in a spot, so it's time to turn to good ol' RSS feeds and email for keeping up with your favorite artists, developers, and friends!
We built one for our game. This is another case in which building forms is more interesting than you'd expect.
Component Set Up
To get things started, I've already built an API similar to the one outlined here in my Analytics and CORS post.
There are ultimately three states for this simple form: Pre-submitting, success, and failure.
Here's the state that accounts for all of that:
// Newsletter.js
import React from 'react';
import styled from 'styled-components';
import { useState } from 'react';
import { signUpForNewsletter } from '../lib/util';
const defaultMessage = 'Enter your email address:';
const successMessage = 'Email submitted! Thank you for signing up!';
const Newsletter = () => {
const [emailValue, setEmailValue] = useState('');
const [message, setMessage] = useState(defaultMessage);
const [emailSuccess, setEmailSuccess] = useState(false);
. . .
};
We're holding the form value in our emailValue state. message is what's displayed above our input, either prompting the user to fill out the form or informing them that they succeeded. emailSuccess is simply state that will adjust the styling for our success message later.
Rendering Our Component
Here is that state in action in our rendered JSX:
// Newsletter.js
return (
<StyledNewsletter onSubmit={handleSubmit}>
<label
htmlFor="email"
style={{ color: emailSuccess ? 'green' : 'inherit' }}
>
{message}
</label>
<input
type="email"
name="email"
id="email"
value={emailValue}
onChange={(e) => setEmailValue(e.currentTarget.value)}
/>
<button type="submit">Sign Up</button>
</StyledNewsletter>
);
Setting our input type to email gives us some nice validation out of the box. I'm going against current common practice by using inline styles here for simplicity.
Handling Submit
Let's take a look at what happens on submit:
// Newsletter.js
const handleSubmit = async (e) => {
e.preventDefault();
if (emailValue && isValidEmail(emailValue)) {
const newsletterRes = await signUpForNewsletter(emailValue);
if (newsletterRes) {
setEmailValue('');
setEmailSuccess(true);
setMessage(successMessage);
} else {
window.alert('Oops! Something went wrong!');
}
} else {
window.alert('Please provide a valid email');
}
};
The HTML form, even when we prevent the default submit action, still checks the email input against its built-in validation. A great plus! I have a very simple isValidEmail method in place just to double-check.
Once we've verified everything looks good with our inputs, on we go to sending our fetch request.
// util.js
export const signUpForNewsletter = (email) => {
const data = { email };
if (!email) console.error('No email provided', email);
return fetch('https://coolsite.app/api/email', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify(data),
})
.then((response) => response.json())
.then((data) => {
console.log('Success:', data);
return true;
})
.catch((error) => {
console.error('Error:', error);
return false;
});
};
I'm including return values here, and a handler based on them later with if (newsletterRes) ... in our component. If the request is unsuccessful, returning false leads into our very simple window.alert error message. Otherwise, we continue on to updating state to render a success message!
Wrap Up
That covers all three states! Inputting, error, and success. This, in my mind, is the bare bones of getting an email form set up! Yet, there's already a lot of interesting wiring that goes into it.
From a design standpoint, a lot of next steps can be taken to build on top of this. From here, you can take a look at the API and handle an automated confirmation message, you can include an unsubscribe flow, and you can include a "name" field to personalize the email.
Even on the front end, a much more robust styling for the form can be put in place.
Maybe more follow up in the future. But for now, a nice sketch to get things started!
Here's the full component in action:
// Newsletter.js
import React from 'react';
import styled from 'styled-components';
import { useState } from 'react';
import { signUpForNewsletter } from '../lib/util';
const defaultMessage = 'Enter your email address:';
const successMessage = 'Email submitted! Thank you for signing up!';
const Newsletter = () => {
const [emailValue, setEmailValue] = useState('');
const [message, setMessage] = useState(defaultMessage);
const [emailSuccess, setEmailSuccess] = useState(false);
function isValidEmail(email) {
return /\S+@\S+\.\S+/.test(email);
}
const handleSubmit = async (e) => {
e.preventDefault();
if (emailValue && isValidEmail(emailValue)) {
const newsletterRes = await signUpForNewsletter(emailValue);
if (newsletterRes) {
setEmailValue('');
setEmailSuccess(true);
setMessage(successMessage);
} else {
window.alert('Oops! Something went wrong!');
}
} else {
window.alert('Please provide a valid email');
}
};
return (
<StyledNewsletter onSubmit={handleSubmit}>
<label
htmlFor="email"
style={{ color: emailSuccess ? 'green' : 'inherit' }}
>
{message}
</label>
<input
type="email"
name="email"
id="email"
value={emailValue}
onChange={(e) => setEmailValue(e.currentTarget.value)}
/>
<button type="submit">Sign Up</button>
</StyledNewsletter>
);
};
export default Newsletter;
const StyledNewsletter = styled.form`
display: flex;
flex-direction: column;
max-width: 400px;
font-family: inherit;
font-size: inherit;
padding: 1rem;
text-align: center;
align-items: center;
margin: 0 auto;
label {
margin: 1rem 0;
}
#email {
width: 80%;
padding: 0.5rem;
/* border: 1px solid #75ddc6;
outline: 3px solid #75ddc6; */
font-family: inherit;
font-size: inherit;
}
button[type='submit'] {
position: relative;
border-radius: 15px;
height: 60px;
display: flex;
-webkit-box-align: center;
align-items: center;
-webkit-box-pack: center;
justify-content: center;
padding: 2rem;
font-weight: bold;
font-size: 1.3em;
margin-top: 1rem;
background-color: var(--cream);
color: var(--brown-black);
border: 3px solid var(--brown-black);
transition: transform 0.2s ease;
text-transform: uppercase;
}
button:hover {
color: #34b3a5;
background-color: var(--cream);
border: 3px solid #34b3a5;
cursor: pointer;
}
`;
Building a Proxy with AWS Lambda Functions and CORS
For those times you just need a sip of backend, Lambda functions serve as a great proxy.
For my situation, I needed a way for a client to submit a form to an endpoint, use a proxy to access an API key through environment variables, and then submit to the appropriate API. The proxy is still holding onto sensitive data, so in lieu of storing an API key on the client (no good!), I'm using CORS to keep the endpoint secure.
Handling Pre-Flight Requests:
This article by Serverless is a nice starting place. Here are the key moments for setting up CORS:
# serverless.yml
service: products-service
provider:
name: aws
runtime: nodejs6.10
functions:
getProduct:
handler: handler.getProduct
events:
- http:
path: product/{id}
method: get
cors: true # <-- CORS!
createProduct:
handler: handler.createProduct
events:
- http:
path: product
method: post
cors: true # <-- CORS!
The key config, cors: true, is a good start, but it's the equivalent of setting our header to 'Access-Control-Allow-Origin': '*'. Essentially, this opens our endpoint up to any origin, so we'll need a way to restrict it to only a couple of URLs.
Serverless here recommends handling multiple origins in the request itself:
// handler.js
const ALLOWED_ORIGINS = [
'https://myfirstorigin.com',
'https://mysecondorigin.com'
];
module.exports.getProduct = (event, context, callback) => {
  const origin = event.headers.origin;
  let headers;
  if (ALLOWED_ORIGINS.includes(origin)) {
    headers = {
      'Access-Control-Allow-Origin': origin,
      'Access-Control-Allow-Credentials': true,
    };
  } else {
    headers = {
      'Access-Control-Allow-Origin': '*',
    };
  }
  . . .
}
This alone would work fine for simple GET and POST requests; however, more complex requests will send a preflight OPTIONS request. I am sending a POST request, but it would have to be an HTML form submission to qualify as "simple." Since I'm sending JSON, it's considered complex, and a preflight request is sent.
A little more looking in serverless docs shows us how we can approve multiple origins for our preflight requests:
# serverless.yml
cors:
origins:
- http://www.example.com
- http://example2.com
Server response with Multiple Origins
When allowing multiple origins, the response needs to return a single origin in the header, matching the request origin. If we send a comma-delimited string with all our origins, the response will not be accepted.
In our server code above, we handled this with the logic below:
const origin = event.headers.origin;
let headers;
if (ALLOWED_ORIGINS.includes(origin)) {
  headers = {
    'Access-Control-Allow-Origin': origin,
    'Access-Control-Allow-Credentials': true,
  };
}
We grab the origin from our request headers, match it with our approved list, and then send it back in the response headers.
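Putting that together, here's a sketch of building a complete Lambda proxy response with the matched origin. The status code and body shape are illustrative, and the fallback to '*' mirrors the handler above.

```javascript
const ALLOWED_ORIGINS = [
  'https://myfirstorigin.com',
  'https://mysecondorigin.com',
];

// Build a Lambda proxy response whose CORS header echoes an approved origin.
const buildResponse = (origin, data) => {
  const allowed = ALLOWED_ORIGINS.includes(origin);
  return {
    statusCode: 200,
    headers: allowed
      ? { 'Access-Control-Allow-Origin': origin, 'Access-Control-Allow-Credentials': true }
      : { 'Access-Control-Allow-Origin': '*' },
    body: JSON.stringify(data),
  };
};
```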
Lambda & Lambda Proxy
To have access to our request headers, we need to ensure we are using the correct integration.
Lambda Proxy integration is the default with serverless and the one that will include the headers.
So why am I pointing this out?
Some Lambdas you work with may include integration: lambda in their config file:
functions:
create:
handler: posts.create
events:
- http:
path: posts/create
method: post
integration: lambda
These are set to launch the function as Lambda integrations.
The general idea is that Lambda Proxy integrations are easier to set up, while Lambda integrations offer a bit more control. The only extra bit of work required for Lambda Proxy is handling your own status codes in the response message, as we did above. Lambda integrations may be more suitable in situations where you need to modify a request before it's sent to the Lambda, or a response after. (A really nice overview of the difference is available in this article.)
So, if you're setting up your own Lambda, no need to do anything different to access the headers. If you're working with an already established set of APIs, keep an eye out for integration: lambda. Accessing headers will take some extra consideration in that case.
Walt Stanchfield & Performing with No Audience
Switching from a performance art to a creating medium has been weird.
As a musician and teacher, the feedback loop was pretty tight. Performing on stage and playing in groups, there's a real magic to having other people in the room responding and reacting in real time and real space.
Even with teaching! Going into a lesson, students would improve noticeably on the spot, or laugh at my bad dad jokes right then and there.
Now I work in software. Don't get me wrong, I get great feedback! Though, it's a difference between publishing and performing.
Creatively instead of playing on stage, I write songs, draw on the couch, and largely play for a digital audience. Much of my creative work is published, not performed.
So I've been thinking about that a lot.
Walt Stanchfield
The late Walt Stanchfield, former Disney animator and teacher, knows what I'm talking about. The guy, on top of being a highly expressive teacher and artist, played concert piano, wrote poems, and was an enthusiastic tennis player.
Here he is talking about animation, though it's easy to see how he could be talking about any digital creative work:
Animation has a unique requirement in that its rewards are vaguely rewarding and at the same time frustrating. We are performers but our audience is hidden from us. We are actors but there is no applause. We are artists but our works are not framed and hung on walls for friends to see. We are sensitive people whose sensibility is judged across the world in dingy theaters by a sometimes popcorn eating audience. Yet we are called upon day by day to delve deep into our psyche and come up with fresh creative bits of entertaining fare. That requires a special kind of discipline and devotion, and enthusiasm. Our inner dialogue must be amply peppered with encouraging argument. We sometimes have to invent or create an audience in our minds to draw for.
Walt knows the curious position because he's been on both sides of this. Here he is talking about performing for a live audience:
I used to sing in operettas, concerts, etc., so I know what real applause is. It is heavenly. A living audience draws something extra out of the performer. A stage director once said to the cast of a play on the opening night, “You’ve had good equipment to work with: a theatre with everything it takes to put on a show. But you have been handicapped—one essential thing has been denied you. Tonight there’s an audience out there; now you have everything you need.”
So is there a solution to dealing with that missing piece? Is it just comparing apples and oranges? Walt recommends drumming up the empathy and imagination yourself, ultimately.
Well, we do have an awaiting audience out there. We’ll be denied the applause but at least there is a potential audience to perform for; one to keep in mind constantly as we day by day shape up our year dress rehearsal. Even as we struggle with the myriad difficulties of finalizing a picture—what is the phrase, “getting it in the can,” we can perform each act for that invisible or mystical audience. We can’t see our audience but it is real and it is something to work for.
So yes, a little bit of imagination.
He mentions it earlier, but devotion and enthusiasm have been the real key for me. I don't think I'd say I necessarily played music for the applause. The practice itself is what's energizing. I'm grateful that all of my disciplines have pretty great feedback loops. They're so physical, tactile, and expressive that the work is reward enough.
Sharing is really just a nice bonus, an artifact of the time well spent chasing a creative thread.
The whole essay is "A Bit of Introspection" from Gesture Drawing for Animation by Walt Stanchfield, a handout made freely available and published into a couple of nice books as well.
Iwata on What's Worth Doing
When it comes to answering the question "What's worth doing?", the internet can muddy it up a bit.
Plenty of good to the internet: Shared information, connecting with far flung people, and finding community.
And, it's also a utility that can deceive us into feeling infinite.
I was surprised to see Nintendo's former president Satoru Iwata wrestle with this in an interview he gave for Hobo Nikkan Itoi Shinbun that was published in the book "Ask Iwata."
"The internet also has a way of broadening your motivations. In the past, it was possible to live without knowing there were people out there who we might be able to help, but today, we're able to see more situations where we might be of service. But this doesn't mean we've shed the limitations on the time at our disposal.
...as a result, it's become more difficult than ever to determine how to spend the hours of the day without regret."
Wholly relatable. Very warm to see Iwata put this in terms of serving people. For creative folk, this could be anything from projects to pursue, audiences to reach, and relationships to develop. There's, I'm sure, an interesting intersection with another change in history — the ability to reproduce art.
"It's more about deciding where to direct your limited supply of time and energy. On a deeper level, I think this is about doing what you were born to do."
Less is the answer, and considering your unique position is what takes the place of overwhelming choice. What you were born to do can be a heavy question unto itself, but thinking of it as what you're in the unique position to do helps.
I'll paraphrase Miyazaki here: "I focus on only what's a few meters from me. Even more important than my films, that entertain countless children across the world, is making at least three children I see in a given day smile." Focusing on the physical space and your real, irl relationships, is likely to guide you towards what's worth doing.
The Gist on Authentication
Leaving notes here from a bit of a research session on the nuts-and-bolts of authentication.
There are cases where packages or frameworks handle this sort of thing. And just like anything with tech, knowing what's going on under the hood can help with when you need to consider custom solutions.
Sessions
The classic way of handling authentication. This approach is popular with server-rendered sites and apps.
Here, a user logs in with a username and password, the server cross-references them in the DB, and handles the response. On success, a session is created, and a cookie is sent back with a session ID.
The "state" of sessions is stored in a cache or in the DB.
Session Cookies are the typical vehicles for this approach. They're stored on the client and automatically sent with any request to the appropriate server.
Pros
For this approach, it's nice that it's a passive process, and very easy to implement on the client. When state is stored in a cache of who's logged in, you have more control if you need to remotely log a user out. Though, you have less control over the cookie that's stored in the client.
Cons
The lookup to your DB or cache can be costly here. You take a performance hit on your requests.
Cookies are also more susceptible to Cross-Site Request Forgery (XSRF).
JWT's
Two points of distinction here: when talking about a session, we mean state stored on the server, not session storage in the client. And while cookies could hypothetically be used to store a limited amount of data, JWTs typically need another storage method, since cookies have a small size limit.
Well, what are JWTs? JSON Web Tokens are a popular alternative to session- and cookie-based authentication.
On successful login, a JWT is returned with the response. It's then up to the client to store it for future requests, working in the same way as sessions here.
The major difference, though, is that the token is verified on the server through an algorithm, not by a DB lookup of a particular ID. That's a major pro of JWTs! It's a stateless way of handling authentication.
Options for storing this on the client include local storage, indexedDB, and some would say, depending on the size of your token, cookies.
Pros
As mentioned, it's stateless. No need to maintain sessions in your cache or on your DB.
More user-related information can be stored with the token. Details on authorization level are common ("admin" vs. "user" permissions).
This approach is also flexible across platforms. You can use JWT's with mobile applications or, say, a smart TV application.
Cons
Because this approach is stateless, you unfortunately have limited control when it comes to logging out individual users remotely. Invalidating tokens would mean rotating your signing secret, logging all of your users out.
Depending on how you store the token, there are security concerns here, too. It's best to avoid local storage in particular: if you accept custom inputs from users, beware of XSS (cross-site scripting), where malicious code run on your site can read tokens straight out of local storage.
Who Wins?
Depending on your situation, you may just need the ease of setup provided by sessions. For an API spanning multiple devices, JWTs may seem appealing. There's also the option to blend the approaches: using JWTs while also storing session logic in a cache or DB.
Some handy libraries for implementing authentication include Passport.js and Auth0. For integrated authentication with Google, Facebook, etc., there's also OAuth 2.0. A tangled conversation on its own! And, admittedly, one that's best implemented alongside a custom authentication feature, rather than as the only form of authentication.
An Overview of Developing Slack Shortcuts
For simple actions, sometimes you don't need a full-on web form to accomplish something; an integration can do the trick. Slack makes it pretty easy to turn what could be a simple web form into an easy-to-use shortcut.
It's a bit of a dance to accomplish this, so this will be more of an overview than an in depth look at the code.
As an example, let's walk through how I'd create a Suggestion Box Shortcut.
Slack API
The first stop in setting any application up with Slack is at api.slack.com. Here we need to:
You'll create a callback ID that we'll save for later. Ours might be "suggestionbox".
Developing your API with Bolt
It's up to you how you do this! All Slack needs is an endpoint to send a POST request to. A dedicated server or serverless function works great here.
Here are the dance steps:
There are multiple steps because we'll receive multiple communications:
Shortcut opens => Our API fires up and sends the modal "view" for the shortcut.
User marks something on the form => Our API listens to the action and potentially updates the view.
User submits the form => Our API handles the request and logs a success / fail message.
Bolt is used here to massively simplify this process. Without Bolt, the raw Slack API uses HTTP headers to manage the different interactions. With Bolt, it's all wrapped up neatly in an intuitive API.
Blocks
The UI components for Slack are called blocks. There is a handy UI for creating forms and receiving the appropriate JSON in their documentation. Several great inputs are included, like multi-select, dropdown, and date picker, along with other basic inputs analogous to their web counterparts.
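As a sketch, a modal view for our hypothetical suggestion box could be built from blocks like these (the IDs and labels here are made up):

```json
{
  "type": "modal",
  "callback_id": "suggestionbox",
  "title": { "type": "plain_text", "text": "Suggestion Box" },
  "submit": { "type": "plain_text", "text": "Submit" },
  "blocks": [
    {
      "type": "input",
      "block_id": "suggestion_block",
      "label": { "type": "plain_text", "text": "What's your suggestion?" },
      "element": {
        "type": "plain_text_input",
        "action_id": "suggestion_input",
        "multiline": true
      }
    }
  ]
}
```

This JSON is what your API sends back when the shortcut fires, and the `callback_id` is how your submission handler knows which form it's receiving.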
Redux Growing Pains and React Query
AC: New Murder's announcement has been par for the course of a major release. Lots of good feedback and excitement, and some big bugs that can only be exposed out in the open.
The biggest one was a bit of a doozy. It's around how we're fetching data. The short version of an already short overview is this:
Naturally, something went wrong in between.
Querying Sanity
Sanity uses a GraphQL-esque querying language, GROQ, for data fetching. A request looks something like this:
`*[_type == 'animalImage']{
name,
"images": images[]{
emotion->{emotion},
"spriteUrl": sprite.asset->url
}
}`
Similar to GraphQL, you can query specifically what you need in one request. For our purposes, we wanted to store data in different hierarchies, so a mega-long query wasn't ideal. Instead, we have several small queries by document type, like the animalImage query above.
The Issue
On app load, roughly 5 requests are sent to Sanity. If it's a certain page with dialogue, 5 additional requests will be sent.
The problem: Not every request returned correctly.
This started happening with our beta testers. Unfortunately, there's not a ton of data to go off of. From what we could tell, everyone had stable internet connections, used modern browsers, and weren't using any blocking plugins.
My theory is that some requests may not be fulfilled due to the high volume of requests at once. I doubt it's because Sanity couldn't handle our piddly 10 requests. More likely, there could be a request limit, though I'm still surprised it would be as low as 10 within a certain timeframe.
Whatever the cause, we had an issue where API requests were failing, and we did not have a great way of handling it.
Contemplating Handling Errors
This project started 2 years ago, when the trend of using Redux for all data storage was still going strong. Things were starting to shift away as the project developed, but our architecture was already set.
There is potentially a Redux solution. Take a look at this Reducer:
function inventoryReducer(state = initialState, action) {
const { type, payload } = action;
switch (type) {
case 'GET_INVENTORY_ITEMS/fulfilled':
return { ...state, items: payload };
...
The "/fulfilled" portion does imply that we log actions for different request states. We could handle the case where a request fails, or even write code for when a "/pending" request hasn't returned after a certain amount of time. Maybe even, say, retry the fetch three times, then error out.
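For illustration, here's a sketch of what that fuller reducer could look like. The action types and state shape are hypothetical, modeled on the snippet above:

```javascript
const initialState = { items: [], loading: false, error: null };

// Hypothetical reducer handling all three request states
function inventoryReducer(state = initialState, action) {
  const { type, payload } = action;
  switch (type) {
    case 'GET_INVENTORY_ITEMS/pending':
      // Request in flight: clear any previous error
      return { ...state, loading: true, error: null };
    case 'GET_INVENTORY_ITEMS/fulfilled':
      return { ...state, loading: false, items: payload };
    case 'GET_INVENTORY_ITEMS/rejected':
      // Surface the failure so the UI can retry or notify the user
      return { ...state, loading: false, error: payload };
    default:
      return state;
  }
}
```

Multiply this by every request in the app, add retry and timeout logic, and you can see how the boilerplate snowballs.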
But, after doing all that, I would have essentially written React Query.
Incorporating React Query
It was time. A major refactor needed to take place.
So, at the start, the app is using Redux to fetch and store API data.
React Query can do both. But, rewiring the entire app would have been time consuming.
So, at the risk of some redundancy, I've refactored the application to fetch data with React Query and then also store the data in Redux. I get to keep all the Redux boilerplate and piping, and we get a sturdier data-fetching process. Huzzah!
Gluing React Query and Redux Together with Hooks
To make all of this happen, we need:
A tall order! We have to do this for 10 separate requests, after all.
After creating my actions and migrating the GROQ requests into query methods, we need to make the glue.
I used a couple of hooks to make this happen.
import { useEffect } from 'react';
import { useQuery } from 'react-query';
import { useDispatch } from 'react-redux';
import { toast } from 'react-toastify';
export default function useQueryWithSaveToRedux(name, query, reduxAction) {
const dispatch = useDispatch();
const handleSanityFetchEffect = (data, error, loading, reduxAction) => {
if (error) {
throw new Error('Whoops! Did not receive data from inventory', {
cause: { data, error, loading, reduxAction },
});
}
if (!loading && !data) {
// handle missing data
toast(
"🚨 Hey! Something didn't load right. You might want to refresh the page!"
);
}
if (data) {
dispatch(reduxAction(data));
}
};
const { data, isLoading, error } = useQuery(name, query);
useEffect(() => {
handleSanityFetchEffect(data, error, isLoading, reduxAction);
}, [data, isLoading, error]);
return { data, isLoading, error };
}
useQueryWithSaveToRedux takes in the query name, query, and Redux action. We write out our useQuery hook, and as the data, isLoading, and error results are updated, we pass them to our handler to save the data. If something goes awry, we have a couple of ways of notifying the user.
These are then called within another hook, useFetchAppLevelData.
export default function useFetchAppLevelData() {
const snotesQuery = useQueryWithSaveToRedux('sNotes', getSNotes, saveSNotes);
const picturesQuery = useQueryWithSaveToRedux(
'pictures',
getPictures,
savePictures
);
const spritesQuery = useQueryWithSaveToRedux(
'sprites',
getSprites,
saveSprites
);
...
return {
snotesQuery,
picturesQuery,
spritesQuery,
...
};
}
useFetchAppLevelData simply brings all these hooks together so that I only need to call one hook in my component. It's mostly here to keep things tidy!
import useFetchAppLevelData from './hooks/useFetchAppLevelData';
function App() {
const location = useLocation();
const dispatch = useDispatch();
const fetchAppLevelDataRes = useFetchAppLevelData();
...
}
A big task, but a full refactor complete!
Writing Music
I had a surprisingly hard time starting up the practice of writing music. Lots of false starts were involved, a ton of back and forth on if I even really enjoyed doing it, and the classic moments of cringing at some of my first tunes.
In a lot of ways, music school _really_ helped me out with the skills and vocabulary needed to make songs.
But then, the unspoken emphasis on theory-driven music and "correctness" was a really difficult funk to shake loose.
So, this is advice for me-from-a-year-ago. Or, maybe it's for you! These are some things I've picked up wrestling in the mud. It's from the perspective of a performing musician switching gears to writing. Maybe it will help if that's you!
Playful Mindset
The meatiest part of getting into it is right here. It's gotta be fun!
Gradually, over the course of going through school and mastering an instrument, I came to assume that what was meaningful was hard. I was fortunate to have wildly supportive instructors. Never did my music school experience come close to the movie Whiplash, is what I'm saying!
But, still, systematically it's a competitive environment.
On the other side of school, creative practices have to be done with much more levity.
It helps that what I write is pretty silly! Take time to do things badly: Write the worst song ever on purpose. Accidentally write avant garde music. Write music to a silly prompt. Anything to get it moving!
Honestly, it's a lifestyle thing. Making time for play, doing things just for the fun of it, feeds into this as well.
There's a balance between finishing songs and always moving to what's most exciting. A balance between keeping a routine and letting enthusiasm guide you. That interplay is what keeps it exciting! Lean towards curiosity and interest as often as you can!
Being a Connector
Sometimes the ideas just come. Seemingly out of nowhere, after assimilating new techniques, sounds, and theory, it all just clicks!
These days are a rush when they happen! And they are few and far between.
In the meantime, I think taking the approach of a connector is really helpful.
Say you want to write a song as if Beethoven wrote Lo-Fi hip hop chill beats to study to.
You have two sounds to work with: Orchestral brilliance and a gentle beat.
Like a DJ, your job is to mix them so that they work together. DJs only have tempo and keys to adjust. You, on the other hand, probably have a lot more tools at your disposal (swapping chords, rhythm, tempo, a new melody, instrumental texture, mood, etc.)
This is one of my favorite parts of the practice because it's SO JUICY! You get to break open and learn a little bit about what makes a certain artist, song, or style sound the way it sounds. There's some transcribing involved that's helpful here. Oftentimes, the pieces that need connecting need some glue. Maybe even original material! So you are in fact writing something new, even if it's just a transition or a different bass line. At the end of all that learning, you have something new that's never existed before! Something complete that gave you lots of cool little tools for future-you writing future-music.
Use References
Expanding on the above point a bit: you shouldn't have any guilt around using references.
Steal like an Artist! You could read a whole little book on it. I'll tell you now: Everyone is stealing something. Even if you're Jacob Collier, you're borrowing from genres, artists, and experimental theory ideas. We're all just riffing on the major scale, at the end of the day!!
Letting go of the weight of trying to be original helped me loosen up. You're probably doing something original by accident, even if you're not trying. We all have such a unique collection of microscopic influences that have bent our ears and minds, it's bound to come through in what you make.
Transcribe
The best thing my general music classes gave me was just enough theory and ear training to transcribe. I also got a lot of weird hang ups about it, so I avoided it for a little while.
Some myth busting on using the tool of transcription:
Momentum is More Important Than Accuracy
Sometimes recordings are muddy, chords are dense, or a sound just isn't sitting in the ear. Move on! Find something that kind of matches the musical/emotional intent, and get back to writing. It would be a shame to let go of learning all the other juicy things about form, harmony, melody, and instrumentation just because it's hard to hear exactly what extensions were being used in a passage.
Know Enough Music Theory for the Major Tropes
In jazz, you have to know about the ii V I. In classical, the dominant to tonic. Knowing enough of the recurring themes in a genre makes transcribing easier, and you get to focus on the building blocks around them instead of dissecting a technique you probably could have found in a blog article somewhere.
Actually, blogs are great places to start with learning these, if it's a ubiquitous form like jazz.
Transcribing is a Learnable Skill
It's like anything. The more of it you do, the easier it is. Being reasonable with it at the start helps keep you moving. For example, maybe just start with the form of a song and then try to write something with the same form. Or focus on major harmonic points instead of every subtle chord shift. There's no test at the end of a transcription. So long as you're picking up a new technique and immersing yourself in a sound, you're learning what you need to from it.
Releasing Music and Overcoming Inertia
I have an arbitrary pacing for when I release music. It's broad enough that if I miss a day, it's no big deal, but frequent enough that it keeps my spirit magnetically pulled to always asking "What's next?"
I've tried a few out: "Write something everyday" was impossible. "Record one album this year" meant it was never going to happen. But having a regular interval somewhere in between those two kept me going.
Having to make it public also helps a lot with accountability, even if no one is actively policing your schedule.
Follow Your Energy Through the Day
Classic productivity prescription. It clicked for me when I heard Dilbert creator Scott Adams talk about it in his sort-of-autobiography. For him, writing happens in the morning, and rote drawing happens in the evening.
Translating to writing: Actual melody/harmony production happens in the morning, edits and tightening up the quality happens in the evening. Or, most of the time in my case, I took the evenings to practice an instrument like guitar or piano. It doesn't take design-type thinking to practice a scale or play an exercise.
Keep a Collection of What You Like
Likes on Spotify, bookmarks in your web browser, whatever! I personally keep a plain text file called FavoriteMusic.md where I copy in links, song titles, and notes on what I like about a song.
I have a list for album ideas. Some may never happen. But on the days where there's simply a blank canvas, both of these lists come in handy.
Make It Real
This might just be helpful to me, personally. If it's not under my fingers, it doesn't always feel very real. At the very least, it becomes too cerebral if it isn't.
Sometimes I find an idea while noodling on guitar. Or from playing sax. My favorite now is piano. Nothing beats it when it comes to visualizing harmony and getting used to thinking polyphonically.
Largely, keeping a part of the process tactile has helped. The day I got an electronic keyboard hooked up to my laptop as a midi input, the game changed.
Be In Motion
Any creative thing — music, art, blogs — is cool because, in my mind, it's a still image capture of something in motion. Like those photos with Long Exposure effects and Light Painting.
In other words - Don't worry about sitting down and not knowing what's going to come out. That's the fun part!! A dash of mystery and a pinch of romance on a day-to-day basis!
You learn from starting. Get something on the page. Then mold it. I think very few folks know exactly how something will go before they sit down to write it. It's a process. In fact, the process is what's so rewarding anyhow! It's a journey of discovery, making something. That's the point of it all in the end. Not to have made, but to be making.
Fonts and CLS
Fonts are a tricky space when accounting for CLS. They have the potential not to ding the CLS score too harshly. Though, if multiple elements' sizing is based on a web font loading, it can add up. A nav bar plus a hero header plus breadcrumbs plus a subtitle plus an author name: it can all contribute to a larger score.
Current solutions are primarily hack-y. There are a few worth experimenting with, and a few coming down the pipe.
Pure CSS Solution
The leading idea offered up is to use font-display: optional. With this, the web font simply isn't used if it doesn't load in time. A great SEO solution, but not an ideal design solution.
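Here's a sketch of what that looks like in a @font-face rule; the family name and file path are placeholders:

```css
/* Placeholder family name and path */
@font-face {
  font-family: 'MyWebFont';
  src: url('/fonts/my-web-font.woff2') format('woff2');
  /* Use the web font only if it loads within the short block period;
     otherwise stick with the fallback for this page view. */
  font-display: optional;
}
```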
CSS Font API
The CSS Font Loading API can be used to determine when the font has loaded, and then render the content to the page. Assigning a callback to document.fonts.onloadingdone lets us switch styles from hidden to display: block. In React, it could look something like this:
const TextComponent = () => {
const [fontLoaded, setFontLoaded] = useState(false)
useEffect(() => {
// Flip state once the browser reports fonts have finished loading
document.fonts.onloadingdone = () => setFontLoaded(true);
// Fallback: render content after a certain time has elapsed regardless
const timer = setTimeout(() => setFontLoaded(true), 1000);
return () => clearTimeout(timer);
}, []);
...
return (
<StyledTextComponent $fontLoaded={fontLoaded}>
...
</StyledTextComponent>
)
}
const StyledTextComponent = styled.ul`
display: ${props => props.$fontLoaded ? 'block' : 'none'};
...
`;
This is not an ideal solution for main page content. It wouldn't be great to have the content missing when SEO bots crawl your site. It would work well for asides, however.
Font Optimization
This article shares some interesting ideas on optimizing fonts so that they load before CLS is accounted for. For some use cases, though, these are heavy-handed solutions. They include:
Font Descriptors
In the future 🪐 we'll see new font descriptors coming to CSS. A great overview is Barry Pollard's Smashing Magazine article. The gist is that we'll have more control over adjusting the size of the fallback font as fonts are swapped out, to mitigate the shifting that comes from a differently sized font.
It's almost there, but will still take some time to fully bake.
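As a rough taste, tuning a local fallback with override descriptors might look like the sketch below. The percentages here are invented; in practice they'd be matched against the metrics of your actual web font:

```css
/* Hypothetical fallback tuned to approximate the web font's metrics */
@font-face {
  font-family: 'fallback-font';
  src: local('Arial');
  size-adjust: 105%;
  ascent-override: 90%;
  descent-override: 22%;
  line-gap-override: 0%;
}
```

With the fallback sized to match, the swap from fallback to web font shifts the layout far less, if at all.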
Aggregation in MongoDB
Earlier I wrote on getting a quick bare-bones analytics feature running for a project.
Now that we're recording data, I want to take a look at actually analyzing what we save.
Knowing just enough about database aggregation goes a long way in providing insight into the data we're collecting! I'll dive into what this looks like on the MongoDB side of things:
Data Model
My use case is pretty simple. All I need to know is how many users have played a game since its release.
So, our data model is similarly simple. Here's what a log for starting the game looks like:
{
"_id": {
"$oid": "633eceff9b5e4de"
},
"date": {
"$date": {
"$numberLong": "1665060607623"
}
},
"type": "play"
}
type here is the type of event that we're logging. "play" marks the start of the game, "complete" when they finish, and there are a few in between.
Aggregation
When fetching the data, I want the database to do the heavy lifting of sorting the documents and counting how many have played the game, finished it, and hit all the points in between. MongoDB's aggregation language makes this a really easy task:
const aggregation = [
{
// Find documents after a certain date
$match: {
date: {
$gte: new Date('Fri, 30 Sep 2022 01:15:01 GMT'),
},
},
},
// Count and group by type
{
$group: {
_id: '$type',
count: {
$sum: 1,
},
},
},
];
Here's what that returns, after a little reshaping (with fake data):
{
"play": 100000000, // Wishful thinking!
"start act 3": 136455,
"complete": 8535,
"start trial": 1364363
}
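One note: the $group stage itself returns an array of { _id, count } documents. A small reduce on the application side reshapes that into the keyed object shown above, something like:

```javascript
// $group returns documents like { _id: 'play', count: 100000000 };
// reduce them into one object keyed by event type.
const toCountsByType = (groups) =>
  groups.reduce((acc, { _id, count }) => ({ ...acc, [_id]: count }), {});

const groups = [
  { _id: 'play', count: 100000000 },
  { _id: 'complete', count: 8535 },
];
console.log(toCountsByType(groups)); // { play: 100000000, complete: 8535 }
```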
The $group operator is pretty flexible. With a little more elbow grease, you could also aggregate counts from month to month, and display a very slick line chart. 📈
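A hypothetical month-by-month pipeline might look like this sketch. $month and $year are standard MongoDB date operators, though the exact grouping shape here is just one option:

```javascript
// Hypothetical pipeline: counts per event type, per month
const monthlyAggregation = [
  {
    $group: {
      _id: {
        type: '$type',
        year: { $year: '$date' },
        month: { $month: '$date' },
      },
      count: { $sum: 1 },
    },
  },
  // Sort chronologically for charting
  { $sort: { '_id.year': 1, '_id.month': 1 } },
];
```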
Coming back to the point from my last article: since we're measuring a game, the interaction is what matters most to us. This data is more reliable and closely integrated with our application, since it relies on actions that bots and crawlers are unlikely to engage with. It's still probably not a perfect representation, but it provides enough data to gauge impact and see where the bottlenecks are in the game flow.
Navigating NPM Package Changes
Subtitled: The Case of the Hidden API! 🔍
An interesting mystery came up with work recently.
The Gist: A library we were using started behaving strangely. After doing some digging, I found out it was because the devs quietly changed their API with a minor release. Here's the detective work that was involved in sniffing this info out:
The Problem
We use Keen-Slider for all of our sliding needs. I wrote a bit of custom code on top of the react library. We needed the slider to render part of a slide's text if the string length exceeded a certain amount. We could then show the rest with a "Read More" button.
One day, out of the blue, this behavior started to act wonky. And it had something to do with a set of options I passed in.
Looking back to September 2021
A completely separate bug came up as I was developing this "Read More" feature back in September. To make a long story short, I ended up finding a solution with this issue ticket on Keen-Slider's Github Repo. The main fix being setting a specific option:
the autoAdjustSlidesPerView property is not described in the doc. You need to set it to false.
autoAdjustSlidesPerView is indeed nowhere to be found in the docs, but it was in the codebase at the time and solved my problem like a charm. So I tossed it into our solution as well.
All was well with the world.
Quiet Deprecation
That is UNTIL we fast forward to where we left off in the story!
I get word that the source is a set of options, including autoAdjustSlidesPerView.
My first thought was, "OK, let me see if I can understand exactly what autoAdjustSlidesPerView is doing." So I looked to the docs to find out.
But they're not in the docs.
No sweat! Let me just look at the source code.
But there's no sign of it in Keen-Slider's source code.
Looking back at the issue ticket, an update came in a few months after from the library's maintainer:
This problem is gone with the new major version.
As simple as that. No mention of autoAdjustSlidesPerView.
But I realized something: more than likely, that change included removing autoAdjustSlidesPerView from the API.
It was never in the docs, just a secret fix for particular edge cases. It makes sense why there would be no word about it.
The comment above states the change came with a major version update, but it was likely taken out in a minor patch.
(An aside on versioning: our NPM version range for this package uses the caret, ^5.3.0. This means we wouldn't have automatically bumped up to the major version mentioned. Hence why I believe the change likely happened incrementally with a minor version change.)
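For reference, that caret range lives in package.json like so. A range of ^5.3.0 allows any 5.x.y at or above 5.3.0, but never 6.0.0:

```json
{
  "dependencies": {
    "keen-slider": "^5.3.0"
  }
}
```

Pinning the exact version (dropping the caret) is the stricter alternative, at the cost of missing bug-fix releases.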
Informed Changes with GitHub
Ok! Problem sourced. So we just need to take the option out, maybe write some new custom logic to handle the original problem, and be on our way.
Our company is not at a scale that justifies writing extensive tests, so I didn't have those to fall back on when I pulled out the autoAdjustSlidesPerView option. I had to make sure myself that I wasn't undoing whatever I was trying to fix in the first place back in September!
Thankfully, Git and GitHub make it incredibly easy to source exactly when I added this line of code. Good git hygiene on my part informed me why I made the change.
At the time of adding the code, I was committing code frequently. My methodology: If I could explain what changed in one sentence, I have a commit. If it takes two sentences, then it's two commits and I need to commit more frequently.
Here's the commit message from when I added autoAdjustSlidesPerView to the codebase:
Correct forced slider view in reviews widget when slidesPerView matches number of cards.
Great! I know exactly what I was trying to solve with this code, and I can see all the relating code around the change.
With all of this information, I was able to ensure both the bug from September and this new bug were fixed!
HTML Form Validation is Pretty Good!
After spending a fair chunk of time working in forms the React way, I've gotta say — we already get a lot of goodies with the basic HTML inputs.
There's a project I worked on recently that had me working in Vanilla JavaScript. No React, no libraries, just raw HTML, JS, and CSS.
I went in mentally preparing to have to write my own library. I was prepped to recreate Formik for this project. But I didn't really have to!
Here are some of the niceties that saved me a ton of time:
Input Pattern Attribute
Without needing to write any JS, you can check to ensure a text input matches a regex pattern. Here's an example from w3schools:
<form action="/action_page.php">
<label for="country_code">Country code:</label>
<input type="text" id="country_code" name="country_code"
pattern="[A-Za-z]{3}" title="Three letter country code"><br><br>
<input type="submit">
</form>
The title attribute is the message that shows when there is a discrepancy between the input value and your pattern regex.
Constraint Validation API
That may not cover all use cases, but modern browsers come with an API for further validation customization.
The Constraint Validation API is available on most form inputs. There are a couple of methods that are useful here: setCustomValidity() and reportValidity(). setCustomValidity allows us to set a custom error message; reportValidity will then show the message when we call it on an element.
When we get to handling form submission, these give us a way of still working with the browser's built in UI and form API.
const handleSubmit = (e) => {
  // We use our custom name validation here
  const nameElm = document.querySelector('#contact-name');
  const isNameValid = validateName(nameElm.value);
  // And integrate with the API here
  if (isNameValid) {
    // If valid, we set the message to an empty string, meaning it passes.
    nameElm.setCustomValidity('');
  } else {
    // If invalid, block submission; setting an error message marks the
    // input as invalid, and reportValidity then shows the message.
    e.preventDefault();
    nameElm.setCustomValidity('Please enter a valid name');
    nameElm.reportValidity();
  }
};
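The validateName call above is assumed to be our own plain JavaScript helper, not part of any browser API. A minimal, hypothetical version might look like:

```javascript
// Hypothetical validator: a letter followed by letters, spaces,
// apostrophes, or hyphens; at least two characters total.
const validateName = (value) => /^[A-Za-z][A-Za-z' -]+$/.test(value.trim());

console.log(validateName('Ada Lovelace')); // true
console.log(validateName(''));             // false
```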
Bonus: The CSS pseudo-classes are available when working with the form in this way. We can still make use of CSS such as this:
input:invalid {
box-shadow: 0 0 5px 1px red;
}
input:focus:invalid {
box-shadow: none;
}
More details and examples are available on the MDN article for HTML form validation.