Hackference 2018
Hackference is always an event that I love to attend - it's got a great lineup, a wide range of attendees from different backgrounds, and the hack is the most relaxed that I've found (aside from the craziness that was BanterHack).
Hackference 2018 was no exception - there were some really interesting conversations and I got some exposure to some new stuff I don't usually play around with.
Deno: what is it?
Martin started the talk by asking if the audience remembered the countless typosquatting issues in the NPM registry, or the left-pad incident. These came about through misuse of the public NPM registry, rather than through NPM the package manager itself.
Martin talked about how Deno is a project created to improve on Node.js and remove some of these issues. It's brought to us by the creator of Node.js, Ryan Dahl, who's had time to consider his regrets around Node and how he'd fix them in the future.
Deno is built to not require a package manager (so no package.json or node_modules), instead managing dependencies through the language itself, referring to URLs using the ES2015 keyword import.
One concern I have around this is working in e.g. an enterprise software house, where there are rules around dependency management for caching purposes, as well as to help manage security issues and Open Source Software intake.
Martin mentioned that Deno caches the resources itself, but can be forced to reload if needed.
import { test } from "https://unpkg.com/deno_testing/testing.ts"
import { log } from "./util.js"
Library creators will need to either support both Node and Deno, or remove Node-specific code from their packages, as Deno isn't necessarily going to be built to be interoperable with Node.
There is also the idea of being able to enforce versioning through the URLs themselves, which is heavily inspired by Go:
import { test } from "https://unpkg.com/deno_testing/testing.ts"
import { test } from "https://deno.land/v2/thumbs.ts"
The language itself is based on TypeScript 3, making TypeScript a first-class citizen. It has TypeScript configuration built in, with Deno hooking into the TypeScript compiler, as well as using the V8 Snapshots capability and further caching within Deno to avoid recompilation where possible.
One of the most interesting things about the language is that they're also building security and sandboxing in as a first-class citizen. The V8 engine has a number of sandboxing features which Node doesn't make use of, whereas Deno has been designed to work with them from the start. Any access to files, environment variables or the network will need to be allowed by the user at the time that they execute the program. Hopefully it won't get to the state where users just run the equivalent of --allow-all, but we can hope.
Currently the permissions are e.g. --allow-network or --allow-write, whereas for true sandboxing I'd want to remove the ability for it to read any files unless I specifically mounted them in. Additionally, with network permissions, I'd want to only allow a subset of URIs, which would help limit the ability for processes to perform dangerous operations.
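As a rough illustration of the model (the exact flag names and APIs have shifted between Deno releases, so treat this as a sketch rather than gospel), a script that touches both the network and the filesystem only runs if the user explicitly grants those capabilities:

// fetch_and_save.ts - needs both network and write access to run
const response = await fetch("https://example.com/data.json");
const body = await response.text();

// This call fails unless write access has been granted on the command line
await Deno.writeTextFile("./data.json", body);

// Run with something like (flag names vary between versions):
//   deno run --allow-net --allow-write fetch_and_save.ts
// Without those flags, Deno refuses the network and file access.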
Another interesting choice is to only allow these to be specified at runtime, and not allow the application / user to specify a "manifest" that can store these required permissions, although it's not yet set in stone as being the only option.
It was quite good to hear that lots of core functionality is being delegated to Rust, the language Deno is written in, rather than being re-implemented in TypeScript, both for performance gains and for code reuse.
A Geospatial Analysis of Twitter Content During the 2014 FIFA World Cup
This was a really interesting talk by Karina, where she spoke about using various techniques to perform sentiment analysis on tweets from attendees at the 2014 FIFA World Cup. Although it's a sporting event, the discussion was often brought down to politics, especially once a country's team got knocked out!
Running systems without setting your hair on fire
Dany spoke about the recent shift from developers building "shitty software" that causes "bad experiences for their customers" to the move to DevOps/SRE, which means "devs have to own their crappy code".
Dany spoke about how we need to think about our customers, and the different error states they could be in - such as having slow internet or low storage which means they can't cache any of your site's images. Users will always have a different environment and set up to you, so you need to build for that, not your machine!
I've seen this before with development teams working on the latest hardware with high-resolution screens, only to find that the consumers of the tool they're building are on lower-end, lower-resolution devices and end up with a less-than-ideal experience.
Dany spoke about how doing our best requires us to think about failure, and how much we're willing to spend on it. Because each extra increment of reliability gets disproportionately more expensive, you need to decide how much you're willing to spend - especially as you can do as much as you want to the infrastructure you own, but that doesn't mean the user is going to have a good experience!
Linking in with Emma Button's talk Who Broke Prod? Growing a Culture of Blameless Failure at DevOpsDays, we need to talk about failure being inevitable, and how we need to understand how things work to understand how they'll fail.
Dany spoke about how the biggest risks are introduced with change - when you make changes, things are more likely to break, because things left as-is are mostly alright. But change is how you fix bugs and deliver features, so change is inevitable, and we need to be careful to reduce risk while still delivering good change.
Smaller changes, more often, is a great model and it helps avoid several months of work being lumped together in a massive production deployment. Continuous Delivery is a great pattern for this but you need to be very careful not to start measuring 'deployments per day' as a metric! Those deployments have to unlock value for a customer to be deemed useful, so instead we should strive to validate something like 'changes per deploy'.
Dany spoke about using feature flags as a safer way of testing in production, for instance allowing you to use continuous delivery as a pattern while targeting internal users, customers on free plans, or a specific beta program. But Dany also warned us about the technical debt caused by feature flags: they increase cyclomatic complexity and need to be removed as soon as possible - the feature should either be 100% live, or stripped out.
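A minimal sketch of the kind of targeting Dany described - the flag names, user shape and rules here are entirely made up for illustration:

// Hypothetical feature flag check - flag names and user segments are illustrative
interface User {
  id: string;
  plan: "free" | "paid";
  isInternal: boolean;
}

// In a real system this would come from a flag service, not a hard-coded map
const flags: Record<string, (user: User) => boolean> = {
  "new-checkout-flow": (user) => user.isInternal || user.plan === "free",
};

function isEnabled(flag: string, user: User): boolean {
  const rule = flags[flag];
  return rule ? rule(user) : false; // unknown flags default to off
}

// Branch on the flag - and remember to delete the flag (and the dead path) once
// the feature is either 100% live or stripped out:
// if (isEnabled("new-checkout-flow", currentUser)) { ... } else { ... }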
If you're making changes, you also need to be in a state that allows you to roll back any changes you make, in an automated fashion, to help you "escape broken deployments". This will not only help you recover quickly, but also help you be more aggressive with your changes, as you know you can roll them back.
Finally, Dany spoke about observability and monitoring, questioning "how do you know your change didn't break something?" Dany told us that we should let robots do this monitoring, as they're more efficient and can save us the manual effort - and if the system can tell us what's wrong, we should let it!
Dany pulled out this quote from Charity Majors, which is really great to keep in mind:
NINES DON’T MATTER IF USERS AREN’T HAPPY
— @mipsytipsy, delivering the realness at @strangeloop_stl
— org-mode in the streets, angband in the sheets. (@hlprmnky) September 30, 2017
Dany spoke about trying to provide graceful degradation - for instance, if an external service cannot be reached, instead of returning a 500, can you return some default values? It may be a better customer experience than the whole system going down, which is not only bad for customers, who are unable to do anything, but could also mean callouts, as failing healthchecks may start to trigger instance recycling.
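A small sketch of that idea - the downstream service, endpoint and default values are invented for illustration:

// Hypothetical example: fall back to defaults rather than failing the whole request
interface Recommendations {
  items: string[];
  personalised: boolean;
}

const DEFAULT_RECOMMENDATIONS: Recommendations = {
  items: ["most-popular-1", "most-popular-2"],
  personalised: false,
};

async function getRecommendations(userId: string): Promise<Recommendations> {
  try {
    // Imaginary downstream service that may be slow or unavailable
    const res = await fetch(`https://recommendations.internal/users/${userId}`);
    if (!res.ok) throw new Error(`upstream returned ${res.status}`);
    return (await res.json()) as Recommendations;
  } catch (err) {
    // Log it for the robots to spot, but keep serving the page with sensible defaults
    console.error("recommendations unavailable, degrading gracefully", err);
    return DEFAULT_RECOMMENDATIONS;
  }
}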
There's also a decision between whether you "fail open" or "fail closed" - for instance, if an agent-facing support system wasn't working, you could still let a user stream music. But if the authentication/authorization service goes down, you should stop new logins / changes to account details.
Dany's experiences with production systems were really enjoyable to hear (although it sounds like they weren't always enjoyable for Dany!) and, coming from slightly smaller systems myself, gave me some real insight.
Getting started with Kubernetes
Bastian's talk was my first real delve into Kubernetes, aside from reading the odd article and pretending I knew what I was reading about!
Bastian spoke about how Kubernetes is a great platform for container orchestration - helping you deploy, run, and scale your containers - while avoiding vendor lock-in, and even helping you run across multiple cloud providers and on-premises!
It was great to have some terminology busted, especially as I'm not very clued up, as well as seeing just how easy it can be to get things working. I felt after the talk I would have a lot more comfort getting my own Kubernetes cluster spun up, although my main issue is that I don't have any services that would need such scalability!
One comment that I've heard echoed is that the install of the stack is the easy part, but managing upgrades - especially major ones - can be a real pain. Moving to a managed Kubernetes service can save you a lot of time.
The learning curve was called out as quite difficult; even though there are lots of things you don't have to worry about, there are new things that you need to learn about to get using it effectively.
But why would you bother with it? It makes the development process easier, as well as solving scalability and scheduling problems, helping you take a single container and scale it across multiple nodes.
Why you should become an editor on the HTML5 spec
Terence's talk was an amazing call-to-action that got at least my group of friends massively motivated to get involved, and helped strengthen my opinion on getting involved in the IndieWeb movement (more details in IndieAuth and Auth0).
He started by giving us a little history lesson with the state of the Web when websites were branded with "Best viewed in Internet Explorer", through to the World Wide Web Consortium (W3C) coming together with browser vendors to create a single specification for the Web and truly standardise it (HTML5).
Terence touched on Tim Berners-Lee's This is for everyone moment at the 2012 Olympics Opening Ceremony, talking about how the whole point of the Web is that it is for everyone to consume and add to.
As HTML5 is written as a living specification, you won't need to wait for errata to be published - you just need to get a PR merged! As it was Hacktoberfest, I scoured the repo for some "easy" issues that I shared with a few less confident friends who were looking for PRs, and who can now say they're editors on the spec!
Terence spoke about the importance of documentation, and that it's both boring and hard to write, which means it's often not as good as it can be. Having fresh eyes to read through and determine if it's written in a readable and digestible format is hugely important.
Terence asked us, "what does WWW stand for?" To our surprise, he told us we were all wrong and that it actually stands for English; the Web is dominated by the English-speaking world, and the HTML5 spec is in technical English, which makes the barrier to entry even higher.
The Web is for everyone, but there's a cost to being in a position where you can contribute to the discussions. The spec may be open, but the working group and the big-picture conversations are driven by corporations and "big browser" - things like DRM have landed, and worse could be around the corner. This call to action was very close to me and my idealism about the Web.
Additionally, Terence spoke about how the Web isn't built with just HTML any more - we've got so much lumped on top (e.g. through JavaScript) that it removes the semantics and drastically reduces accessibility.
For instance, the mobile web is huge but is built for thumb-first navigations. On desktops, it's very hard to navigate purely through the keyboard. There are so many assumptions that our users are dextrous, mouse-using people, yet the numbers show we're not.
Developing accessible websites isn't just the law; it's the moral thing to do, and Terence's quote below is really powerful:
"if you can't afford to make your product accessible, you can't afford to launch your product" @edent asking me think at #hackference
— Lorna Mitchell (@lornajane) October 12, 2018
Terence called out that it's probably the majority of your users that are somewhat affected by accessibility issues, so why aren't you building your software to work for them? A great, thought-provoking, call to action.
Introduction to Modern Identity
Jeremy from Auth0 gave a great overview of what Modern Identity is. He gave us an intro to the fundamentals of identity being the persona you hold, which is often tied to a unique representation, such as a passport number, but can also be who you present yourself as to other people or to a service.
He spoke a little about what authentication and authorization are, and the role of identity in personalising experiences for users.
On the topic of authentication, Jeremy spoke about the different ways that you can authenticate, from Single Sign On to biometrics, as well as the up-and-coming passwordless logins. Passwordless is mostly done via email, but also available through other means like SMS, and usually consists of links that provide you with the ability to automatically log in, once clicked on.
Jeremy spoke about how passwords should be a thing of the past; the sheer number of breaches makes them dangerous, especially where there is credential re-use.
But the even bigger threat is trying to roll your own authentication. Jeremy warned against the risks involved, highlighting that he wasn't pushing us to use Auth0, but to use anything other than rolling it ourselves. By delegating to another service, be it via social login or an identity platform like Auth0, you'll be able to let them handle all of that.
This then unlocks the ability to look at the value-add features, such as adaptive risk multi-factor auth or cleverly detecting bot networks.
Jeremy went on to talk about how authorization should be all about requesting the right access at the right time, rather than asking for too much too early. This is something that I know Android tried to fix recently with the permissions system in Marshmallow, where you no longer accept permissions at install time, but when it's actually being used. By not requesting the authorization too early, you'll also make sure that a user is not exposed for as long with that access being enabled, allowing them to revoke access as and when they need, as well as not confusing them as to why you need so many permissions.
Jeremy spoke about how identity doesn't end at the login screen - we need to make sure that data is secure at rest (e.g. on disk, in backups) and in transit (e.g. using TLS encryption on network traffic). Data breaches aren't a matter of if they're going to happen, but when, Jeremy stressed.
Jeremy talked about Multi Factor Authentication and that the factors are:
- "something you know", such as a password or your date of birth
- "something you have", such as a mobile phone with a pre-registered phone number
- "something you are", such as the Wifi network you are connected to
Although three forms is best, even having two of these is better than one!
As I work on Identity day-to-day, there weren't many concepts that I've not been exposed to before, but that didn't make it any less of a great talk, and helped to hear from another Identity provider in terms of what their views are on the topic.
It was interesting seeing how Auth0 are trying to support as many providers and authN/authZ flows as possible to make identity hands-off, and easy for you to implement without needing to roll your own. They're targeting Identity as a Service as their market segment, and everything I've seen seems to look great - I've recently been looking at an SSO provider for my own tooling which would mean I didn't have to set up and scale my own identity solution (as I do that enough in the day job!).
Although the product itself is not Open Source, there are a number of components and libraries that have been Open Sourced.
They're also trying to remove the monopoly on identity that the likes of Facebook and Google have. By removing a single tied logon, and instead supporting many providers through Auth0, you can break their grip on the Identity space, as well as helping your customers log in with the accounts they want to!
I'm looking forward to playing around with the platform a little more, and hopefully getting some single-sign on goodness for some of my (self-)hosted applications.
Serverless: The Missing Manual
James' talk was all about common questions around how to work with / migrate to serverless. As he's been to lots of conferences and events to talk about serverless, he's collated some of the top questions he's received and has a handy set of answers for them all.
"Where do I store my files?"
James spoke about the differences in architecture between a traditional three-tier application (with an application server, web server, and database) and the new architecture, where static assets are stored e.g. on a Content Delivery Network, away from the functions. This is because you shouldn't need to interact with them - they should all be managed externally to the function.
There is limited storage in the container's runtime, but it is primarily scratch space, is not shared between concurrent requests, and provides no external or persistent access! Additionally, files your function interacts with need to be in the deployable package, which means if you're e.g. interacting with 100MB media files, you'd need them all distributed in e.g. your JAR file - not ideal!
The idea is to instead use something like Amazon S3 as an object store for any static files, where you're given storage-as-a-service so you don't need to manage anything about how they're stored. They're also unlimited:
Cloud object storage - as unlimited as your bank account @thomasj at #hackference pic.twitter.com/rrSpxLyI5A
— Jamie Tanna (@JamieTanna) October 12, 2018
By using an object store, you can easily interact with objects using HTTP APIs, including allowing users to upload directly to your storage service, or making those files directly accessible to users depending on their permissions.
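For instance, a function can hand the client a short-lived URL so the upload goes straight to the object store rather than through the function itself - a sketch assuming the v2 AWS SDK for JavaScript and a made-up bucket name:

import { S3 } from "aws-sdk";

const s3 = new S3();

// Returns a pre-signed URL the browser can PUT the file to directly,
// so the (potentially large) upload never passes through the function
export function getUploadUrl(userId: string): string {
  return s3.getSignedUrl("putObject", {
    Bucket: "my-upload-bucket", // hypothetical bucket
    Key: `uploads/${userId}/avatar.png`,
    Expires: 300, // URL is valid for five minutes
    ContentType: "image/png",
  });
}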
How do I install my database?
James mentioned that in the new world, you can instead use one of the many database-as-a-service offerings, meaning you don't have to worry about setting it up and maintaining it. Additionally, this means that you'll be priced depending on usage, rather than spending money on the ongoing instance costs and maintenance requirements.
This model works better with an event-driven architecture, as the specific database used on the back-end matters less, and these services are usually built to be near-infinitely scalable (billing permitting), instead of the old model of database design where a limited number of connections was expected.
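As a sketch of what that looks like in practice (the table name and item shape are invented), a function can just talk to a managed table rather than holding a connection pool open to a server you maintain:

import { DynamoDB } from "aws-sdk";

const documentClient = new DynamoDB.DocumentClient();

// Each invocation makes a stateless call to the managed database -
// there's no connection pool or database server for us to provision and patch
export async function recordEvent(eventId: string, payload: unknown): Promise<void> {
  await documentClient
    .put({
      TableName: "events", // hypothetical table
      Item: { eventId, payload, receivedAt: Date.now() },
    })
    .promise();
}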
Can I use <insert framework here>?
The answer was no, but more importantly James questioned why you wanted to use a framework. Usually frameworks make it easier to handle routing of HTTP requests to an internal method, but with serverless that's handled by the FaaS you're using. The scaffolding is often not required, nor is middleware like Cross-Origin Resource Sharing or rate limiting, as that can be handled well before your function is called, e.g. on the way through your API gateway.
We often use frameworks as an easier way to package and deploy the application as a whole. However, serverless architectures provide you much smaller components which means that the packaging and deployment isn't as complicated.
That being said, there are frameworks for serverless, like the Serverless Framework, which is primarily targeted at the actual build and deployment of the components to the Functions-as-a-Service provider, such as AWS Lambda.
Managing credentials for your application can be done via IAM roles, environment variables or services like AWS Config Manager.
How do you debug?
As you don't have any access to the runtime environment(s), you need to have reliable logging. Following a 12-factor log approach and logging everything to stdout and stderr is best, as they should be picked up by your FaaS provider.
You'll also want to look at metrics such as how many invocations of the function are being made, what percentage of them are errors, how long the function takes to execute, and whether it's from a cold or warm start. Custom metrics such as CPU and memory usage, or garbage collection activity, can be very important to keep an eye on as well.
One method of achieving this custom metric gathering is to log those details out to e.g. stdout, and then have a scheduled application parse the logs and publish the metrics to a more consumable service.
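A sketch of what that might look like inside a handler - the metric names and log shape are made up; the point is just that everything goes to stdout as structured lines that something downstream can parse:

// Hypothetical structured "metric" lines written to stdout from a handler;
// a scheduled job (or the provider's log pipeline) can parse and forward them later
function emitMetric(name: string, value: number, unit: string): void {
  console.log(JSON.stringify({ type: "metric", name, value, unit, at: Date.now() }));
}

export async function handler(event: unknown): Promise<{ statusCode: number }> {
  const started = Date.now();
  try {
    // ... actual business logic would go here ...
    return { statusCode: 200 };
  } catch (err) {
    emitMetric("handler.errors", 1, "count");
    console.error(err); // stderr is also collected by the FaaS provider
    throw err;
  } finally {
    emitMetric("handler.duration", Date.now() - started, "ms");
  }
}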
Can you set a limit on costs?
It's more likely that you'll get into trouble with serverless due to the theoretically infinite scalability of the functions, bounded only by your wallet. James' advice was that you should build rate limiting in as your first step, and add billing alerts as your second.
What about testing?
I asked this question, and was pointed primarily to Slobodan Stojanović's blog post.
What about code reuse and business logic?
One recommendation is to follow a "hexagonal architecture", in which you have different layers of composable responsibilities.
If you call other functions over HTTP you'll be creating a nice composable structure, but that will also lead to you incurring costs on e.g. invocations and network usage, whereas if you create a shared library, you then have to push updates to many places whenever that shared library is updated.
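One way to read the hexagonal advice (a sketch under my own interpretation, with invented module and function names) is to keep the business logic in a plain module and keep the FaaS handler as a thin adapter around it:

// core/orders.ts - business logic that knows nothing about Lambda, HTTP, or storage
export interface Order {
  id: string;
  total: number;
}

export function applyDiscount(order: Order, percent: number): Order {
  return { ...order, total: order.total * (1 - percent / 100) };
}

// handler.ts - a thin adapter: translate the FaaS event, call the core, translate back
// import { applyDiscount, Order } from "./core/orders";
export async function handler(event: { body: string }): Promise<{ statusCode: number; body: string }> {
  const order: Order = JSON.parse(event.body);
  return { statusCode: 200, body: JSON.stringify(applyDiscount(order, 10)) };
}

This keeps the core testable and reusable without extra HTTP hops or a shared deployment artifact.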
Announced at AWS re:Invent was Lambda Layers, which allows for code reuse across many Lambdas without the need to bundle it in the actual deployable artifact.
Burnout and your Meat Computer
I'd really recommend having a watch of Jess presenting this same talk at EMF Camp as it's one of those talks you should experience to feel the full effect.
Jess started the talk by asking some questions and asking us to count how many affected us. After that she mentioned that not only are they signs of burnout, but they also mirror symptoms of, and may trigger, mental health issues such as depression.
Now that we were all pretty worried we were in a terrible mental place, Jess spoke about what we can do next. Firstly, we need to seek professional help for any critical mental health issues, including burnout. She mentioned that if you're in a personal/professional place in your life where you can just take months of time off to help your mental health, then do it! But most likely you're not, so you'll need to triage.
Jess recommended finding what you can cut down on that won't damage your life, e.g. cutting down evening activities from five days to four, or outsourcing where possible - can you get food subscriptions, so you don't need to deal with decisions about food? Can you get a cleaner so you don't need to worry about cleaning the house?
Jess's key question was "what do you have to do to survive?" To be a better person long-term, you need to invest in self care short-term; so what's the least amount of stuff we can do to just get by, after which we can start to build back up? One of Jess's self-professed secret powers is selective emotional investment: you can only care about so much in the world, and get involved in so many discussions, so she's really good at thinking about which conversations/arguments/etc. to get involved in.
She stressed that you need to recharge and let your mind and body recover, as it can take many many months to actually recover from burnout.
And finally, on the topic of stressing over the choices you're making, Jess spoke about the psychology of choice, e.g. "should I say no to my friend who wants me to meet for dinner?" Unless the choice irreparably ruins your life, you'll look back at it and see it as okay. Use the best judgement you have at the time, and only spend the level of bandwidth on the decision that you can spare, relying on instinct if you're emotionally and mentally drained, as that may well be the most informed choice you can make.
Learn to say no, and not to overcommit yourself - which incidentally is where Revert 'Some knowledge-sharing news' came in. I was definitely starting to feel that I was burning out while preparing Packt training courses, as I had very little spare time left in the week. I've detailed more of the reasoning behind the decision to discontinue the courses there, and it's well worth a read to see how I came to the decision once I realised it was causing me to burn out.
When you've burned out, you find you've got a great excuse to say no to things, and that it can give you more freedom - but it shouldn't take burning out to learn that you're able to say no. Instead, you need to start practicing your ability to say no as early as possible.
I burned out very badly last November when I was working on pushing the Capital One UK Web Servicing platform live. I was regularly working voluntary paid overtime as I wanted to see it through, but due to the slow feedback time of our stack, it was often a case of waiting around for hours for a deployment before we would know if our changes worked. I managed to make it to the go-live date, with my manager making the first official login on the production system, and then I was like "yeah.. that's me done".
I had a Tech Nottingham talk on Kickstarting your Automated Build and Continuous Delivery platform with GitLab CI, but I was really not in a state to do it. I cancelled it just before lunch that day, which caused extra stress, massive disappointment in myself, and disappointment on behalf of everyone who wanted to see it.
After I'd burned out, I spent about a week at home with a really bad cold, generally sleeping and not doing anything too mentally taxing. After a week being off, I was feeling mentally better, but still a bit ill, so I decided to work from home for a couple of days until I was back to health. This was a pretty bad time and a massive reminder that I'd been working in a non-sustainable fashion - which I'd known all that time, and knew it was taking effect, but I wanted to get everything over the line.
So this talk really did speak to me; I knew how that period after the burnout wasn't great, and I didn't want the team to hit that, either. I know that Steve and Kanagapal were having a really tough time while I was off with my Ruptured Appendix, and I know that they must've been burning out pretty hard. Things have been so much better since, but we need to try to act before we reach the point where we think we're getting close to burning out.
After the talk, I came in on Monday and asked the team these questions, without any of the context. When I shared the reasoning behind asking, they were really shocked at the impact on their own mental wellbeing. Many of us identified that we were either getting close to or experiencing burnout.
Since the talk, we've been talking about it more and being more aware of the fact that we're not all in the best mental states and that we need to do better. This awareness has led to us making sure that when any of us are working later than we should be, or seem to not be in as good a place mentally, we push them to go home or at least take a break.
Hackathon
Anna, Carol and I built a hack called New Login, Who Dis?, which added another step to common multi-factor authentication flows in Auth0, by combining "something you have" (your phone) with "something you know" (a 6-digit code).
This involved using Nexmo Calls in conjunction with Nexmo DTMF, which allowed us to receive the passcode from the user in a super-friendly way. We decided on this route over Nexmo Verify as it increased the factors that an attacker would need to have, leading to a more secure method of authentication. That being said, when demoing, Carol literally had her 6-digit code on a post-it note, so it's not quite the perfect solution in a world that should be moving to a password-less landscape!
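For context, the call flow is driven by an NCCO - a JSON document of call actions that Nexmo fetches and executes. A rough sketch of the kind of NCCO we hosted, with illustrative wording and a made-up callback URL:

// Sketch of a Nexmo Call Control Object (NCCO): speak a prompt, then collect
// DTMF digits and POST them to our Lambda-backed callback URL (URL is illustrative)
const ncco = [
  {
    action: "talk",
    text: "New login, who dis? Please enter your six digit passcode.",
  },
  {
    action: "input",
    maxDigits: 6,
    eventUrl: ["https://example.execute-api.eu-west-1.amazonaws.com/prod/dtmf-callback"],
  },
];

export default ncco;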
Sam from Nexmo mentioned that DTMF was still quite new, so we thought it'd be a great chance to get something fresh for him as well as giving us a new angle on a hack - it must be hard judging lots of hacks using the same tech, so getting the chance to try something new and hopefully get a new angle was good.
New Login, Who Dis? consisted of the following components:
- Auth0 tenant domain
- Auth0 Rule to authenticate-with-nexmo
- AWS Lambda to trigger a call to Nexmo
- AWS S3 Bucket to expose Nexmo NCCO publicly for the call to occur
- AWS Lambda to receive response from Nexmo's call to the user
Anna and Carol took point on the Lambdas, with Anna trying out the Serverless Framework that had been recommended in James' talk as well as Carol putting my limited node knowledge to shame with her awesome skills! Practicing a top-tip from the Serverless DevOps Open Space at DevOpsDays London, we resolved to make our Lambdas as small as possible and not blocking for IO - instead having another lambda which could be invoked after the Nexmo call had completed, to allow us to respond to the code that was entered by the user on the call.
In true roll-your-own-authentication guise, we built the authentication system very hackily, hardcoding the password check in advanced-security-module.js, and only enabling it to work with a single user. However, it would be relatively straightforward to give our AWS Lambdas access to some state around the user, allowing the Nexmo callback Lambda to update the Auth0 user's metadata to inform the system that they had successfully completed an MFA flow.
// Hard-coded passcode check - very much hackathon-quality "security"!
const checkPassword = function (input) {
  const password = "756984";
  return input === password;
};

module.exports.checkPassword = checkPassword;
We also felt that there must've been a better method to inform Auth0 that we weren't yet able to finalise the authentication request, but instead decided on flipping a boolean flag that denoted whether the user had authenticated. This of course wouldn't be thread-safe, nor tied to a single authentication request, allowing an attacker with access to a related login session to wait until a user had completed their MFA via Nexmo, and then steal their session.
Our original (horrible) UX for the flow was to block the Auth0 Rule until a response came back from the Nexmo call, but that was a) horrible and, more importantly, b) wouldn't work! Because Auth0 Rules (rightly) time out after ~30 seconds, we realised that we needed to follow a non-blocking approach. This pivot meant we had some rearchitecting to do, as once the Rule had triggered the call, there was no way to let it know the user had completed it; our end UX was just to make them attempt to log in again, which would then allow them to log in as their MFA had succeeded.
Another refactoring step would be to take advantage of WebTasks more heavily, but we felt we'd stick to the devil we knew, AWS Lambda.
Finally, as these Lambdas were being published in a publicly accessible form, we would need to add some further safeguards to prevent unauthorised requests from coming in. It'd make sense to require an API key to hit the Lambdas, via AWS API Gateway, as well as maybe even verifying that data coming "from Auth0" is signed by the correct signing key for the tenant domain, ensuring the authenticity of the request.
We also shoehorned Cloudinary in so it could display the cat images post-login - with some wacky effects.
I spent a good bit of time working out how to unit test my Auth0 rules, as I couldn't find anything online about it. I found a solution, and am hoping to have a guest blog post on the Auth0 site soon - watch this space!
IndieAuth and Auth0
While on holiday a couple of weeks prior, I'd been able to spend some more time reading about IndieAuth and the IndieWeb as a movement. While also wanting to explore Auth0 and to be able to say I had my own Identity Server (given I work on Identity daily, and am a huge fan of the idea of true Single Sign On), I thought I'd play around with extending Auth0 to support IndieAuth.
Implementing IndieAuth is something that I've been planning to roll out across my site and services in the coming year, so expect to hear more about it in my 2018 in Review blog post and upcoming issues on the issue tracker.
Well before the hack, I'd mentioned that if all else failed and I didn't have anything else to work on, my fallback would be to look at IndieAuth. Then, after talking about it with Anna and Carol, we decided that it'd be a great thing to play around with, especially with Auth0 there. I spent a while looking into Auth0's documentation to determine how to create a Custom Social Provider which would handle IndieAuth for everyone. However, I couldn't quite find a way to dynamically discover a provider's endpoints, as they had to be set each time - this was something I could very well have asked Jeremy and Luke about, and they would've told me that it wasn't supported... Whoops!
Although you're able to create a Custom Social Provider, you need to know both the Access Token URL and the Authorization URL ahead of time, which is unfortunately not possible due to the self-discovery aspect built into the IndieAuth protocol. I'm interested to chat some more with the folks at Auth0 about the feasibility of implementing it, or whether it'd need to integrate with some common IndieAuth providers, such as IndieLogin.
Watch this space!
Hacktoberfest
Last but not least, we actually managed to get a tonne of Hacktoberfest PRs in on Saturday night, while we were having a bit of a hacking break in the evening.
It was quite nice to have the hack progressing at a pace that we were able to take quite a long break, which I think was down to planning an easy-to-achieve hack within the timelines we had, as well as splitting down tasks quite well so we could all easily work in parallel.
Prizes + Closing Words
Well, we managed to win not just Auth0 Hacktoberfest t-shirts, but also Nanoleaf Starter Kits, which completely blew us away!
There were some other great hacks, including, but not limited to:
Jess Crees and her teammate built a great app called ComplementBot, a really wholesome hack which would call you to give you a compliment. They'd had some trouble getting all the integrated pieces set up for the demo, but it was really awesome to see them winning two prizes for what they had - it was a lot of great tech, and as mentioned, super wholesome, and a really great idea following Jess Rose's talk about burnout.
Being apparently known as "the Python guy" by Mike (for my previous hacks being mostly in Flask), I was able to give a hand with their hack, as we'd finished our hack and I know how horrible it can be in the last hour where things stop working! We were able to get the deployment up to Clever Cloud successfully in time for the demos, which I was super happy with.
Dan's hack, Presenter-As-a-Service, was a really great idea and a good use of Cloudinary. Although primarily used as a tool to troll Mike, it was a great idea to allow splicing images of conference talks with any slide you wanted. I could actually see myself using this in cases where photos of me speaking at a talk don't have the greatest shot of the slides in the background, so I'd be able to touch them up to get a clearer photo.
It was also great to see a young hacker join us for the Sunday morning and make a Harry Potter Web VR game in a matter of hours - it was really awesome to see someone younger getting into hackathons at that age, as well as being able to build some awesome stuff. I wouldn't have been able to get the same amount done in a few full days, let alone a matter of hours!
There were also some great hacks just around wanting to try something new, like CSS Grid, which may not be (quote) "cool" or prize-worthy, but are still really useful for someone to have learned over the course of a weekend. I'm always a fan of Hackference's hack as it's always super chilled, and doesn't feel as pressured to "always be bigger" or for you to get super competitive to try to win the prizes.
I want to say another massive thanks to Mike for an awesome three days, he does such a great job, and looking around the faces of everyone at the closing speeches, everyone had such a great time and appreciated it loads. I don't know about you, but I'm definitely going to miss it when the myth of The Last Hackference does come true!