Introducing Mega Nap!

For the love of apps!

Yes this is a post about an amazing web application I helped create but first…

I finished coding school!

I graduated on June 26 with a Certificate of Training in FullStack JavaScript from Alchemy Code Lab in beautiful downtown Portland, Oregon. The program kicked my ass and there were a few moments when I wasn’t sure I was going to make it, but I worked really hard and came out of the program confident, capable, and full of great ideas and the programming chops to make them a reality. A huge thank you to Ryan for being a great instructor, Paige, Ryan, Easton, and Mack for being great TAs, Shannon for teaching us how to prepare for our job searches, and Marty and Megan for running such a great program.

What a fine looking group of alums!

Okay, on to the fun stuff!

I’m thrilled to present Mega Nap, the easiest way to make a full stack application without having to write a lick of backend code.

Mega Nap was my final project at Alchemy and was created by myself and four other students: Emily Baier (@hellotaco), Chris Piccaro (@picassospaint), Marty Martinez (@TDDandDragons), and Tommy Tran (@TranTTommy). We built it in just six days using an agile development process involving user stories, story point estimation, mini-sprints, and daily retrospectives. It was an incredibly balanced team and we worked really well together.

What Is Mega Nap?

Mega Nap is a web application that allows frontend developers to create a database, design database models, populate their new database, and receive RESTful API endpoints they can ping to access their data. All of this is done via a few simple web forms, so they can quickly and easily create and use API endpoints without having to write any backend code.
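
For example, once a user has built a “Dog” model and added some data, using it from a frontend is just a fetch away. The endpoint URL below is hypothetical; Mega Nap generates the real one for each user and model:

    // Hypothetical endpoint; the app returns each model's real URL.
    fetch('https://mega-nap-example.herokuapp.com/api/v1/dogs')
      .then((response) => response.json())
      .then((dogs) => console.log(dogs));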

The inspiration for this project came from working almost exclusively with the Pokemon API while learning to fetch from third-party data sources in React/Redux applications. Now don’t get me wrong, that API is legit. However, using the same data over and over in our apps was getting boring, so we decided to create a tool that frontend developers or those new to programming could use to make their own APIs. We brainstormed features, divvied up the work, and Mega Nap was born!


The Client

The Mega Nap client is built with React/Redux and deployed to Netlify. We used Auth0 for user account creation and authentication, and styled-components in lieu of plain CSS for most of the styling. We ran unit and snapshot tests in Jest and used redux-promise-middleware to handle promises in our API fetch services.

One particularly tricky part of the front end was the data entry form the user fills out after creating their database models. We needed a form whose fields were dynamically generated from the names and types the user had just set as the key/value pairs in their database model. To accomplish this, we ran Object.entries on the parsed model schema JSON object our server returned after the model was created, which gave us an array of field entries. We passed that array to our form component, which mapped over it, ran each entry’s value through a switch, and returned the correct JSX label and input for that field type. We then rendered the resulting list of labels and inputs, allowing the user to immediately enter their data!
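
Here’s a minimal sketch of that pattern (not our exact component; the schema shape and names are illustrative):

    import React from 'react';

    // Maps a model field type to an HTML input type.
    function inputTypeFor(fieldType) {
      switch (fieldType) {
        case 'Number': return 'number';
        case 'Boolean': return 'checkbox';
        default: return 'text';
      }
    }

    // Renders a label/input pair for each [name, type] entry in a parsed
    // schema like { name: 'String', age: 'Number', goodBoy: 'Boolean' }.
    export default function DataEntryForm({ schema, onSubmit }) {
      return (
        <form onSubmit={onSubmit}>
          {Object.entries(schema).map(([name, type]) => (
            <label key={name}>
              {name}
              <input name={name} type={inputTypeFor(type)} />
            </label>
          ))}
          <button type="submit">Save</button>
        </form>
      );
    }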


An example of a dynamically-generated data upload form based on a “Dog” model.

The Server

The Mega Nap server is built with Node.js and Express, is deployed to Heroku, and uses MongoDB for data storage. We used Auth0’s jwks-rsa npm package to create middleware that ensures authentication. We only needed two database models: the Database, which is used to create a new database for each of the user’s models, and the Model, which has a schema value that holds all of the user’s inputted model values. We used Cloudinary for uploading and storing images, so our users can upload images and receive image URIs back in their API calls, and we don’t have to waste precious database space storing their images.
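
The auth middleware looks roughly like this sketch, assuming the usual pairing of jwks-rsa with the pre-v7, CommonJS express-jwt package (the post doesn’t name express-jwt, so that pairing is an assumption, and the tenant domain and audience are placeholders):

    const jwt = require('express-jwt');
    const jwksRsa = require('jwks-rsa');

    // Validates each request's bearer token against our Auth0
    // tenant's public signing keys.
    const ensureAuth = jwt({
      secret: jwksRsa.expressJwtSecret({
        cache: true,
        rateLimit: true,
        jwksRequestsPerMinute: 5,
        jwksUri: 'https://YOUR_TENANT.auth0.com/.well-known/jwks.json'
      }),
      audience: 'https://your-api-identifier',
      issuer: 'https://YOUR_TENANT.auth0.com/',
      algorithms: ['RS256']
    });

    // app.use('/api/v1', ensureAuth);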

We create each new model schema using a reviver function, which takes the key/value pairs entered by our user as field names and input types and runs them through a switch. The reviver is passed as the second argument to JSON.parse, and the resulting object is used to create a new Schema with Mongoose. We intentionally restricted the data types the user can use in their models to strings, numbers, and booleans so we wouldn’t have to worry about models referencing or being dependent on other models. This allowed us to maintain a very flat, two-level database structure, with each user model being its own collection and all data being added as single documents in the appropriate model collections.
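
In sketch form (the field names are illustrative):

    const mongoose = require('mongoose');

    // Reviver: converts the user's type strings into Mongoose-friendly types.
    function reviver(key, value) {
      switch (value) {
        case 'String': return String;
        case 'Number': return Number;
        case 'Boolean': return Boolean;
        default: return value; // leave everything else untouched
      }
    }

    // e.g. schemaJson = '{"name":"String","age":"Number","goodBoy":"Boolean"}'
    function makeModel(modelName, schemaJson) {
      const definition = JSON.parse(schemaJson, reviver);
      return mongoose.model(modelName, new mongoose.Schema(definition));
    }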

Each user’s model gets its own collection in our MongoDB database.
The data uploaded for each model is stored as an individual document in its appropriate collection.

The Look and Feel

We knew from the beginning that we wanted the user experience to be as painless and fun as possible. To achieve this we chose a modern, earthy-yet-energetic color scheme, using the Color Marketing Group’s prediction for 2020 Color of the Year, Neo Mint, as our primary color and combining it with cooler neutrals and one pop of vibrant pink for contrast.

I designed the homepage based on wireframes we worked on together, using modified iconography found on FlatIcon.com and trying to create an aesthetic that spoke to the fun, simple experience we wanted the user to have on our site. Emily and I styled the site together, with me handling most of the static or global components and her working on form styling and transitions. This was the first time either of us had really used styled-components, so we not only had to figure out our styling in just a few days, we also had to learn a new styling library to do it.

An earlier version of the logo and word mark. We liked it, but it was too difficult to incorporate into a web design.

Next Steps

We’re all really proud of this project and are planning on making improvements to it as our individual schedules allow. My contribution to the future of the site is to make a mobile version of it using React Native. I’ve played around a bit with React Native and am excited to dive deeper into the documentation and begin building our mobile version soon.


Thanks so much for reading about our humble little web app! I’m really proud of what we were able to accomplish in under a single work week and hope you have fun creating API endpoints using it.


Until next time friends, here codes nothing!

Say Hello to Robot Haikubot!

The best automated Twitter account on the planet.

Last week my project team made something I’m particularly proud of: Robot Haikubot! 

Robot Haikubot combined our interests in poetry, data aggregation, sentiment analysis, automation, and social interaction into a Twitter bot that was fun to make and is even more fun to use.

I am a robot
Created by students at
Alchemy Code Lab

What It Does

Robot Haikubot will tweet you a randomly-generated haiku when you tweet at it; e.g., “Hey @RobotHaikubot, you up?”

You can also include a #positive, #negative, or #neutral hashtag in your tweet and get back a haiku with that sentiment; e.g., “Hey @RobotHaikubot, I would love a #positive haiku please!”

Finally, you can add to our database of five-syllable and seven-syllable lines by adding the hashtag #five or #seven to your line; e.g., “@RobotHaikubot #five Get me a glass, please”. And don’t worry about adding erroneous lines to the db. Robot Haikubot uses syllable counting to validate if a line being added has five or seven syllables. If it doesn’t, the line won’t be added — no harm, no foul!


How It’s Built

Robot Haikubot is built using Node.js, MongoDB, and Express for data management and manipulation, and the Twit, Sentiment, and Syllable npm packages for accessing the Twitter API and checking syllable counts and sentiment. We deployed the final app to Heroku.

For syllable counting, we wrote a function using the Syllable npm package and imported it into a syllable-count middleware that validates the syllables in a line, passing the request to the correct route if it’s valid and responding with an error message if not.
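
A minimal sketch of that middleware (the route and field names are hypothetical, and this assumes the CommonJS build of the syllable package):

    const syllable = require('syllable');

    // Returns middleware that rejects lines whose syllable count
    // doesn't match the expected value.
    function checkSyllables(expected) {
      return (req, res, next) => {
        const count = syllable(req.body.text);
        if (count === expected) return next();
        res.status(400).json({
          error: `Expected ${expected} syllables but counted ${count}`
        });
      };
    }

    // app.post('/api/v1/fives', checkSyllables(5), createFive);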

For sentiment, we wrote a function using the Sentiment npm package that runs a valid five- or seven-syllable line through the analyzer and assigns a composite sentiment score to that instance of the line model before storing it in the database. This allowed us to handle sentiment-specific GET requests when a user asks for a haiku.
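
Something like this sketch, where the positive/negative/neutral cutoffs are our assumption:

    const Sentiment = require('sentiment');
    const sentiment = new Sentiment();

    // Buckets a line into a sentiment label based on its composite score.
    function scoreLine(text) {
      const { score } = sentiment.analyze(text);
      if (score > 0) return { text, score, label: 'positive' };
      if (score < 0) return { text, score, label: 'negative' };
      return { text, score, label: 'neutral' };
    }

    // scoreLine('Get me a glass, please'); // -> { text, score, label }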

Finally, the Twit npm package allowed us to open a stream from our bot account and listen for tweet events that mention our bot’s name and then tweet a haiku to the user making that request.

(Fun fact: don’t tweet your bot’s name from the account you have programmed to listen for that name and reply to unless you want an infinite loop of your bot listening for and tweeting to itself!)
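
Here’s a rough sketch of that stream-and-reply loop, guard included; the makeHaiku helper is hypothetical, standing in for the real bot’s database lookup of random fives and sevens:

    const Twit = require('twit');

    const T = new Twit({
      consumer_key: process.env.CONSUMER_KEY,
      consumer_secret: process.env.CONSUMER_SECRET,
      access_token: process.env.ACCESS_TOKEN,
      access_token_secret: process.env.ACCESS_TOKEN_SECRET
    });

    // Hypothetical helper standing in for the database lookup.
    const makeHaiku = () =>
      'I am a robot\nCreated by students at\nAlchemy Code Lab';

    // Listen for any tweet that mentions the bot.
    const stream = T.stream('statuses/filter', { track: '@RobotHaikubot' });

    stream.on('tweet', (tweet) => {
      // Guard: never reply to ourselves, or the bot loops forever.
      if (tweet.user.screen_name === 'RobotHaikubot') return;

      T.post('statuses/update', {
        status: `@${tweet.user.screen_name} ${makeHaiku()}`,
        in_reply_to_status_id: tweet.id_str
      });
    });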

Make Requests via Postman

Want to interact with Robot Haikubot but don’t have a Twitter account? You can make requests directly to the API via Postman by getting from and posting to these routes:


Contribute to Robot Haikubot!

My team did a ton of work to get Robot Haikubot up and running last week but there are still a handful of stretch goals we didn’t get to that we’d love some help with! If you’re comfortable with Node, MongoDB, Express, Mongoose, or just want to play around with our code, grab one of the open tickets here and have a go! For testing, you will need to include a .env file in your root and populate it with your version of the following keys:

 # credentials for accessing and authenticating MongoDB
 MONGODB_URI=your-key-here
 AUTH_SECRET=your-key-here

 # Twitter credentials
 CONSUMER_KEY='your-key-here'
 CONSUMER_SECRET='your-key-here'
 ACCESS_TOKEN='your-key-here'
 ACCESS_TOKEN_SECRET='your-key-here'
 TWITTER_SECRET='your-key-here'

 # base URL for routes/Twit API
 BASE_URL=http://localhost:your-preferred-port

 # admin credentials for authenticating fives and sevens
 ADMIN_LOGIN='your-key-here'

Finally, don’t forget to follow my brilliant project partners on Twitter:


Thank you for reading.
Now, log on to Twitter and
have fun with our bot!

Week 5 & 6: Bootcamp II Begins!

Yes of course I’m going to make ‘fetch’ jokes geez.


Okay, I’m gonna be real with y’all for a minute.

These weekly recap posts are starting to get a little tiresome.

“But Ben, we love your weekly recaps! They’re so informative and funny and full of GIFs of handsome mens!”

Look. I hear you. I get it. I’m not saying I’m going to stop writing them, or that I’m not getting some recap goodness out of them myself. I just want to make it clear that I’m getting a teensy bit bored with the format and will be exploring other types of posts in the future (IN ADDITION TO THE RECAPS, CALM DOWN). I’ve got a number of good ideas cooking already, including posts about how coding is teaching me to be a better fiction reader, how AI might change what it means to be a developer, and the difference between being able to write code and being able to read it. So if you, like me, are growing tired of this format, have no fear. Hot new topics are heading your way soon. In the meantime, let’s dive into what I learned the past two weeks!

We kicked Bootcamp II off with a bang, and by bang I mean a head-first dive into asynchronous programming. Here’s what we covered the past two weeks:

  • HTML templates
  • Using .forEach on arrays
  • Named imports and exports
  • Arrow functions
  • Callbacks
  • Sorting, slicing, and paginating data
  • Asynchronous functions and promises
  • (Making) fetch (happen) and accessing/using third party API data

Code in Action

I have two projects to show you this week. They’re both fairly similar, but I’m showing both because I’m proud of the progress I made between building the two. The News Search app, which I built first, was my first solo attempt at fetching and paginating data from an external API, and it took a lot of note-referencing and Googling in order to build it. The NASA GeneLab Search app was my second attempt, and I was able to bang that out in a few hours Friday afternoon. That build was actually the first time I was able to put on music, open up VSCode, and get into a sort of coding-trance that I’d heard developer friends talk about before (thanks Bey!). It was weird and awesome, and I’m proud that I was able to reach that state from my previous project frustration in just a few days.

News Search App
GitHub Repo Here | Live Site Here

  • What it is: A search site that pings NewsAPI.org to get results from over 30,000 news sites and blogs.
  • What it demonstrates: Fetching from a third-party API, promises and asynchronous programming, pagination, sorting and filtering of data.
  • Hardest part: Remembering all the steps involved: capturing a user’s search input, creating a hash query to track the search and the page, listening for a change on that hash, using the hashchange event to trigger the API fetch, and paginating the results (see the sketch after this list). Oh, and styling of course (OF COURSE!).
  • Easiest part: Accessing and displaying the fetch results. The NewsAPI sends back a pretty reasonable object from a fetch, so it was easy to access the data I needed and place it where I wanted it.
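
A minimal sketch of that hash-driven flow (the API key, form IDs, and render step are placeholders):

    // Fetch one page of results from NewsAPI.
    async function searchNews(query, page) {
      const url = `https://newsapi.org/v2/everything?q=${encodeURIComponent(query)}&page=${page}&apiKey=YOUR_KEY`;
      const response = await fetch(url);
      const { articles } = await response.json();
      return articles;
    }

    // On submit, encode the search state in the hash instead of fetching directly.
    document.getElementById('search-form').addEventListener('submit', (event) => {
      event.preventDefault();
      const query = event.target.elements.search.value;
      window.location.hash = `q=${encodeURIComponent(query)}&page=1`;
    });

    // The hashchange is what triggers the fetch, so pagination links and the
    // back button both work by simply changing the hash.
    window.addEventListener('hashchange', async () => {
      const params = new URLSearchParams(window.location.hash.slice(1));
      const articles = await searchNews(params.get('q'), Number(params.get('page')) || 1);
      articles.forEach((article) => console.log(article.title)); // render step here
    });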

NASA GeneLab Search App
GitHub Repo Here | Live Site Here

  • What it is: Another search app, this one working with NASA’s GeneLab database.
  • What it demonstrates: Same as above: API fetching, manipulating and displaying results data, hashchange events, etc.
  • Hardest part: Unlike the NewsAPI results, the objects NASA was sending back were layered and complex (go fig). This made it difficult to figure out how to correctly name the interpolation placeholders in the template literal sections of my list-building function (see the sketch after this list).
  • Easiest part: Surprisingly, styling was pretty easy this time around. I tried to mimic the NASA GeneLab site itself, which was laid out in a very straightforward way, so I didn’t have to bang my head against the wall trying to figure out how to make it look good. Turns out NASA scientists aren’t too concerned with snazzy CSS effects. Who would’ve thought?
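
To illustrate the naming problem (this result shape is purely hypothetical, not GeneLab’s actual response):

    // With nested data, each placeholder has to spell out the full path.
    const result = {
      study: {
        title: 'Rodent Research-1',
        organism: { name: 'Mus musculus' }
      }
    };

    const listItem = `
      <li>
        <h3>${result.study.title}</h3>
        <p>Organism: ${result.study.organism.name}</p>
      </li>
    `;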

Closing Thoughts

I don’t have any significant insight about struggles or successes to share this week, but I will say that I’m excited to start working with concepts that are more nuanced and complicated and tools that allow me to participate in the larger ecosystem that is this World Wide Web I’ve been hearing so much about. This coming week we’re going to start working with cloud-based databases and (eep!) backend stuff, so I’m very stoked about what we’ll be building in the next few days.

I’ll leave you tonight with an adorable story from this weekend: I was babysitting my son (sperm-donor daddy here, he has two moms, we’re a Big Gay Family, email me with questions) and he couldn’t stop talking about Pokemon. Well, last week we just so happened to build a Pokedex to practice pagination and sorting data, and when I showed it to him he was mesmerized. I asked him if he’d like me to build it out more so he could use it to look up information about any Pokemon and he said, “Yes! And of course you want to start with Pikachu. But after that I think it’ll probably take three… no, four… no, five weeks to add all the other Pokemon. So I’ll look forward to it in five weeks!”

Honestly one of the most reasonable client requests I’ve heard.

He was shocked that my MacBook wasn’t a touchscreen.

It was a nice reminder that what I’m learning in class isn’t all theoretical. There are real people in the world who value these skills and get excited about the things I could potentially build for them, even if the end product is still five weeks out. That’s a nice feeling. 🙂

So until next week friends, here codes nothing!

Feature Photo by Daniel Lee on Unsplash