Introducing Mega Nap!

For the love of apps!

Yes, this is a post about an amazing web application I helped create, but first…

I finished coding school!

I graduated on June 26 with a Certificate of Training in FullStack JavaScript from Alchemy Code Lab in beautiful downtown Portland, Oregon. The program kicked my ass and there were a few moments when I wasn’t sure I was going to make it, but I worked really hard and came out of the program confident, capable, and full of great ideas and the programming chops to make them a reality. A huge thank you to Ryan for being a great instructor, Paige, Ryan, Easton, and Mack for being great TAs, Shannon for teaching us how to prepare for our job searches, and Marty and Megan for running such a great program.

What a fine looking group of alums!

Okay, on to the fun stuff!

I’m thrilled to present Mega Nap, the easiest way to make a full stack application without having to write a lick of backend code.

Mega Nap was my final project at Alchemy and was created by myself and four other students: Emily Baier (@hellotaco), Chris Piccaro (@picassospaint), Marty Martinez (@TDDandDragons), and Tommy Tran (@TranTTommy). We built it in just six days using an agile development process involving user stories, story point estimation, mini-sprints, and daily retrospectives. It was an incredibly balanced team and we worked really well together.

What Is Mega Nap?

Mega Nap is a web application that allows frontend developers to create a database, design database models, populate their new database, and receive RESTful API endpoints they can ping to access their data. All of this is done via a few simple web forms, so they can quickly and easily create and use API endpoints without having to write any backend code.

The inspiration for this project came from working almost exclusively with the Pokemon API while learning to fetch from third-party data sources in React/Redux applications. Now don’t get me wrong, that API is legit. However, using the same data over and over in our apps was getting boring, so we decided to create a tool frontend developers, or those new to programming, could use to make their own APIs. We brainstormed features, divvied up the work, and Mega Nap was born!


The Client

The Mega Nap client is built with React/Redux and deployed to Netlify. We used Auth0 for user account creation and authentication, and styled-components in lieu of plain CSS for most of the styling. We ran unit and snapshot tests in Jest and used Redux promise middleware for handling promises in our API fetch services.

One particularly tricky part of the front end was the data entry form the user fills out after creating their database models. We needed a form with fields that were dynamically generated based on the name and type the user had just set as the key/value pairs in their database model. To accomplish this we had to create an array of fields by running Object.entries on the parsed model schema JSON object we got from our server after the model was created. We then passed this to our form component, which mapped through that array and created a list of fields by running each array item value through a switch and returning the correct JSX form label and input based on the field type. We then rendered the list of returned labels/inputs, allowing the user to immediately enter their data!
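As a rough illustration, the mapping step can be sketched in plain JavaScript like this (the schema shape, field names, and return values here are hypothetical stand-ins for our actual component code, which returns JSX):

```javascript
// A minimal sketch: turn a parsed model schema ({ fieldName: typeString })
// into field descriptors a form component could map over and render.
function makeFields(schema) {
  return Object.entries(schema).map(([name, type]) => {
    switch (type) {
      case 'Number':
        return { name, inputType: 'number' };
      case 'Boolean':
        return { name, inputType: 'checkbox' };
      default:
        return { name, inputType: 'text' };
    }
  });
}

// e.g., for a hypothetical "Dog" model:
const fields = makeFields({ name: 'String', age: 'Number', goodBoy: 'Boolean' });
// The form component then maps over `fields`, rendering a label and the
// right input element for each entry.
```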


An example of a dynamically-generated data upload form based on a “Dog” model.

The Server

The Mega Nap server is built with Node.js and Express, is deployed to Heroku, and uses MongoDB for data storage. We used the jwks-rsa Auth0 npm package to create middleware that ensures authentication. We only needed to create two database models: the Database, which is used to create a new database for each of the user’s models, and the Model, whose schema value holds all the user’s inputted model values. We used Cloudinary for uploading and storing images, so our users can upload images and receive image URIs back in their API calls, and we don’t have to waste precious database space storing the images themselves.

We create each new model schema using a reviver function, passed as the second argument to JSON.parse: it takes the key/value pairs entered by our user as field names and input types and runs each type through a switch, and the parsed result is used to create a new Schema with Mongoose. We intentionally restricted the data types the user could use in their models to strings, numbers, and booleans, so we wouldn’t have to worry about models referencing or being dependent on other models. This allowed us to maintain a very flat, two-level database structure, with each user model being its own collection and all data being added as single documents in the appropriate model collection.
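A minimal sketch of that reviver approach (the field names here are hypothetical, and the final Mongoose call is shown only in a comment):

```javascript
// Map user-entered type strings to the constructors Mongoose expects.
const TYPES = { String: String, Number: Number, Boolean: Boolean };

// JSON.parse calls the reviver on every key/value pair; we swap each
// recognized type string for its constructor and leave everything else alone.
function reviver(key, value) {
  return Object.prototype.hasOwnProperty.call(TYPES, value) ? TYPES[value] : value;
}

const definition = JSON.parse(
  '{"name":"String","age":"Number","goodBoy":"Boolean"}',
  reviver
);
// definition is now { name: String, age: Number, goodBoy: Boolean },
// ready to hand to new mongoose.Schema(definition).
```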

Each user’s model gets its own collection in our MongoDB database.
The data uploaded for each model is stored as an individual document in its appropriate collection.

The Look and Feel

We knew from the beginning that we wanted the user experience to be as painless and fun as possible. To achieve this we chose a modern, earthy-yet-energetic color scheme, using the Color Marketing Group’s prediction for 2020 Color of the Year, Neo Mint, as our primary color and combining it with cooler neutrals and one pop of vibrant pink for contrast.

I designed the homepage based on wireframes we worked on together, using modified iconography found on FlatIcon.com, trying to create an aesthetic that spoke to the fun, simple experience we wanted the user to have on our site. Emily and I styled the site together, with me handling most of the static or global components and her working on form styling and transitions. This was the first time either of us had really used styled-components, so we not only had to figure out our styling in just a few days, we also had to learn a new styling library to do it.

An earlier version of the logo and word mark. We liked it, but it was too difficult to incorporate into a web design.

Next Steps

We’re all really proud of this project and are planning on making improvements to it as our individual schedules allow. My contribution to the future of the site is to make a mobile version of it using React Native. I’ve played around a bit with React Native and am excited to dive deeper into the documentation and begin building our mobile version soon.


Thanks so much for reading about our humble little web app! I’m really proud of what we were able to accomplish in under a single work week and hope you have fun creating API endpoints using it.


Until next time friends, here codes nothing!

Say Hello to Robot Haikubot!

The best automated Twitter account on the planet.

Last week my project team made something I’m particularly proud of: Robot Haikubot! 

Robot Haikubot combined our interests in poetry, data aggregation, sentiment analysis, automation, and social interaction into a Twitter bot that was fun to make and is even more fun to use.

I am a robot
Created by students at
Alchemy Code Lab

What It Does

Robot Haikubot will tweet you a randomly-generated haiku when you tweet at it; e.g., “Hey @RobotHaikubot, you up?”

You can also include a #positive, #negative, or #neutral hashtag in your tweet and get back a haiku with that sentiment; e.g., “Hey @RobotHaikubot, I would love a #positive haiku please!”

Finally, you can add to our database of five-syllable and seven-syllable lines by adding the hashtag #five or #seven to your line; e.g., “@RobotHaikubot #five Get me a glass, please”. And don’t worry about adding erroneous lines to the db. Robot Haikubot uses syllable counting to validate if a line being added has five or seven syllables. If it doesn’t, the line won’t be added — no harm, no foul!
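Here’s a hypothetical sketch of how a mention’s hashtags could be parsed into a command; the real bot’s parsing logic may differ:

```javascript
// Turn the text of a mention into a command object: add a 5- or 7-syllable
// line, or request a haiku with an optional sentiment.
function parseMention(text) {
  if (/#five\b/i.test(text)) return { action: 'add', syllables: 5 };
  if (/#seven\b/i.test(text)) return { action: 'add', syllables: 7 };
  const match = text.match(/#(positive|negative|neutral)\b/i);
  return { action: 'haiku', sentiment: match ? match[1].toLowerCase() : null };
}
```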


How It’s Built

Robot Haikubot is built using Node.js, MongoDB, and Express for data management and manipulation and the Twit, Sentiment, and Syllable npm packages for accessing the Twitter API and checking syllable count and sentiment. We deployed the final app to Heroku.

For syllable counting, we wrote a function using the Syllable npm package and imported it into a syllable-count middleware that validates the syllables in a line, passing the request on to the correct route if the count is valid and returning an error message if not.
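A sketch of that middleware, with the counting function injected so the example stays self-contained (in the real app the counter comes from the Syllable package):

```javascript
// Express-style middleware factory: accept the line only if it has
// exactly 5 or 7 syllables, otherwise respond with a 400.
function makeSyllableValidator(countSyllables) {
  return (req, res, next) => {
    const count = countSyllables(req.body.line);
    if (count === 5 || count === 7) {
      req.syllableCount = count; // downstream routes can branch on this
      return next();
    }
    res.status(400).send({ error: `Expected 5 or 7 syllables, got ${count}` });
  };
}
```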

For sentiment, we wrote a function using the Sentiment npm package that runs each valid five- or seven-syllable line through the analyzer and assigns a composite sentiment score to that line’s model instance before storing it in the database. This allowed us to take sentiment-specific get requests from the user when asking for a haiku.
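A hedged sketch of that scoring step, with the analyzer injected so the example runs on its own (it stands in for the Sentiment package’s analyze function):

```javascript
// Classify a line as positive, negative, or neutral based on the
// composite score returned by the injected analyzer.
function labelSentiment(line, analyze) {
  const { score } = analyze(line);
  if (score > 0) return 'positive';
  if (score < 0) return 'negative';
  return 'neutral';
}
```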

Finally, the Twit npm package allowed us to open a stream from our bot account and listen for tweet events that mention our bot’s name and then tweet a haiku to the user making that request.

(Fun fact: don’t tweet your bot’s name from the account you’ve programmed to listen for that name and reply, unless you want an infinite loop of your bot listening for and tweeting to itself!)

Make Requests via Postman

Want to interact with Robot Haikubot but don’t have a Twitter account? You can make requests directly to the API via Postman by getting from and posting to these routes:


Contribute to Robot Haikubot!

My team did a ton of work to get Robot Haikubot up and running last week but there are still a handful of stretch goals we didn’t get to that we’d love some help with! If you’re comfortable with Node, MongoDB, Express, Mongoose, or just want to play around with our code, grab one of the open tickets here and have a go! For testing, you will need to include a .env file in your root and populate it with your version of the following keys:

 # credentials for accessing and authenticating MongoDB
 MONGODB_URI=your-key-here
 AUTH_SECRET=your-key-here

 # Twitter credentials
 CONSUMER_KEY=your-key-here
 CONSUMER_SECRET=your-key-here
 ACCESS_TOKEN=your-key-here
 ACCESS_TOKEN_SECRET=your-key-here
 TWITTER_SECRET=your-key-here

 # base URL for routes/Twit API
 BASE_URL=http://localhost:your-preferred-port

 # admin credentials for authenticating fives and sevens
 ADMIN_LOGIN=your-key-here

Finally, don’t forget to follow my brilliant project partners on Twitter:


Thank you for reading.
Now, log on to Twitter and
have fun with our bot!

Let’s Catch Up!

Absence makes the blog grow fonder.

Well hello there stranger! It’s been a minute, hasn’t it? Apologies for not posting anything the past SIX WEEKS (eep!), but Bootcamp II was a rush, then I went to New York for spring break (pics below!), then we started Career Track. I know you’re eager to read all about my progress over the past month-and-a-half, so without any more fuss or delay, here’s…

What I Learned the Past Six Weeks: A Brief Recap

  • Week 7: APIs and serverless data storage
    • This week involved learning how to fetch data from third-party APIs, sort, filter, and paginate the results, and use Firebase to authenticate and save users. The main project I made this week was a Candidate Tracker that allows users to up-vote their favorite Democratic 2020 primary candidates during debates. GitHub repo here, deployed site here.
  • Week 8: Final Project Week (see Code In Action, below)
    • My team’s final project for Bootcamp II was a gif-based translation and guessing game using the Giphy API. See below for a more detailed description and a link to the deployed site.
  • Week 9: Spring Break: I went to NYC!
    • This was my first time in New York and I had a blast. I went to the Met, saw Sleep No More, hung out in Central Park, sang karaoke at Stonewall, bookstore-hopped in Brooklyn, and got pizza at 3 AM. Scroll to the bottom for some highlight pics.
  • Week 10: Node Fundamentals: Backend Stuff, Binary, Buffers
    • The first few weeks of Career Track really kicked my ass. This is the first time I’ve worked on the backend and I found it very hard to pick up the concepts. This week we were introduced to Node.js, binary, buffers, bitmaps (the hardest fucking thing I’ve ever tried to learn), destructuring, arrow functions, callbacks and asynchronous code, creating a local database, using Jest to test in the terminal, and installing project dependencies. It was a lot to come back to after a week in New York.
  • Week 11: Server Fundamentals Using Node
    • The second week of Career Track was as hard as the first, if not more so. This week we got into creating our first Node servers, learned about promises, (attempted) to make a chat app, started pinging APIs from the backend, started working with Express, learned about middleware, learned about Big O, and had to write our own functions that performed the same actions as .pop() and .push() without using any existing array or string methods, only loops and indexes. Oy.
  • Week 12: Express and MongoDB
    • Last week was a little easier, mostly because we started using Express and Mongoose to help with server creation and middleware. We also were introduced to MongoDB and Postman and deployed our first apps to Heroku! It was a week of bringing vague backend concepts out of the shadows and seeing how they work together in a much more user-focused way.

Code In Action

My final project for Bootcamp II is my favorite thing I’ve made in the course to-date. I really enjoyed my team, we kept on pace for the entire week, we worked through issues calmly when we were starting to get irritated with the project and each other, and we finished in time to add in a couple nice-to-haves to the final product. Check it out:

Talk Giphy to Me
GitHub Repo here | Live Site Here

  • What it is: A web app that uses the Giphy API to allow users to translate a message into a series of gifs, play a hangman-style guessing game based on a random gif, and save their favorite gifs to a page for later viewing.
  • What it demonstrates: Fetching from a third-party API, promises and asynchronous programming, pagination, sorting and filtering of data.
  • My main takeaways: I learned a lot about asynchronous functions, array methods, the order and placement of event listeners, different ways to manipulate the Firebase database, and how not to go about styling a group project (pro-tip: don’t do it as a group!). I also got a lot more comfortable with Flexbox and learned about the CSS ‘Computed’ inspection tool in Chrome. 

Closing Thoughts

I know I mentioned this in the last post, but future posts will stray from the Week-In-Review format. I’ve got two alternate format posts in the cooker already: one on how to deal with burnout, and the other on how to hash a new user’s password using virtuals and hooks in Mongoose. You can barely stand the wait!

Also, earlier I promised some pics from my trip to New York, so here ya go.

Until next week friends, here codes nothing!

Feature Photo by Bewakoof.com Official on Unsplash