Blog

  1. Swift Core Data Format String Injection

    Or how I developed a love/hate relationship with format strings

    The last couple of months since WWDC have been an interesting exercise in forgetting the complexity of Objective-C (ObjC), falling in love with Swift, and then realizing Apple hasn't completely dropped ObjC for Swift. Programming an iOS application in Swift quickly becomes a translation effort in converting ObjC code to use Swift syntax and format, since even Apple's own developer site still references ObjC examples in documentation and Class References. In addition, the Xcode 6 beta and Swift are not a complete match. Auto-generated code often fails when using Apple's provided templates, including Cocoa Touch Classes. Pay attention to the compiler errors and warnings before trying to actually deploy anything.

    This post explores Swift's interaction with Core Data and how to break (and secure) format strings using wildcards and injection techniques. Core Data is Apple's object graph and persistence framework that makes it easy for Mac and iOS developers to store and retrieve data without the overhead of dealing with databases or other network services. If you are unfamiliar with Core Data, Apple's tutorial is quite extensive and can take some time to get through. I would recommend Techtopia's iOS 7 Core Data tutorial to get the basic gist of working with the technology. The one issue is that most tutorials still only address Core Data use with ObjC, so translation to Swift is something you'll have to figure out.

    Interaction with Core Data hasn't changed much between ObjC and Swift. The following Swift code snippet shows how a test iOS application handles login based on a 'User' object that is stored as a Core Data object.
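
    A minimal sketch of such a routine, assuming a 'User' entity with 'username' and 'password' attributes (the function signature and names are illustrative, not the application's actual code):

        import CoreData

        // Sketch of a vulnerable login check against a Core Data 'User' entity.
        // Entity and attribute names are assumed for illustration.
        func login(username: String, password: String, context: NSManagedObjectContext) -> Bool {
            let request = NSFetchRequest<NSManagedObject>(entityName: "User")

            // User input is concatenated directly into the predicate format string.
            request.predicate = NSPredicate(
                format: "(username LIKE '\(username)') AND (password LIKE '\(password)')")

            // Any matching stored object is treated as a successful login.
            let matches = (try? context.fetch(request)) ?? []
            return !matches.isEmpty
        }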



    The function is fairly straightforward, but the part that should interest a security professional is the NSPredicate declaration, which is defined as a format string that limits what data is returned from Core Data.
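
    In the sketch above, that declaration is the line that interpolates user input into the format string:

        // User-controlled values are dropped straight into the format string.
        let predicate = NSPredicate(
            format: "(username LIKE '\(username)') AND (password LIKE '\(password)')")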



    This format string should set off all sorts of alarm bells for anyone creating authentication routines. This application does not follow security best practices when creating the predicate or building the associated format string.

    First of all, the use of the 'LIKE' statement enables simple bypass of authentication within this app. A failed attempt at logging into the app results in:



    But the use of wildcards (the star '*' in predicate format strings)



    results in successful entry into the app.



    As with the creation of secure SQL statements, use of the 'LIKE' keyword within NSPredicate creation should be avoided for everything but search utilities. Not only does it allow an attacker to present wildcards to Core Data, it also allows an attacker to enumerate all of the relevant accounts in the database (a*, b*, etc.). In fact, any keyword that allows the use of wildcards (e.g. LIKE, CONTAINS) or uses them implicitly (BEGINSWITH, ENDSWITH) should be avoided. A full description of format string syntax can be found in Apple's Predicate Programming Guide.

    Remediation of wildcard injection vulnerabilities is a fairly simple matter. First of all, convert the format string 'LIKE' keyword to '='. A 'LIKE' query is unnecessary within the authentication routine.
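
    Continuing the earlier sketch (same assumed attribute names), the adjusted predicate would look something like this:

        // '=' requires an exact match, so the '*' wildcard is treated as a literal character.
        request.predicate = NSPredicate(
            format: "(username = '\(username)') AND (password = '\(password)')")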



    This simple change corrects the easy authentication bypass seen in the previous example.



    However, the format string is still vulnerable to injection and can be bypassed by using more complicated predicate instructions. For example, entering the string ') OR 1=1 OR (password LIKE '* into the username field allows an attacker to successfully bypass the authentication requirement.
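
    With the concatenated format string from the earlier sketch, that input expands to a predicate along these lines:

        // username input: ') OR 1=1 OR (password LIKE '*
        // Predicate after interpolation ('...' is whatever was typed into the password field):
        //   (username = '') OR 1=1 OR (password LIKE '*') AND (password = '...')
        // The always-true 1=1 clause matches every stored User object.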



    This attack is very similar to SQL Injection and is the result of string concatenation during format string creation. To complete the secure format string, pass the user input as arguments that are substituted into the format string at runtime, as follows.
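
    Continuing the sketch, a parameterized version passes the values as %@ arguments so that NSPredicate quotes and escapes them itself:

        // User input is supplied as arguments and substituted at runtime,
        // so it is treated as data rather than as predicate syntax.
        request.predicate = NSPredicate(format: "username = %@ AND password = %@",
                                        username, password)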



    This fully escapes any attempt at injection using double quotes, which would be required to escape out of the format string when it is built in this manner.



    Finally, our application login function is secure from format string injection.

    For an application penetration tester, identifying format string injection points is critical to determining the security of an application. During an assessment, identifying Core Data use can be difficult, since some fields may be linked to user preferences or backend web services. The only sure way to identify these vulnerabilities is through source code analysis. If client limitations prevent providing an Xcode project for analysis, fuzz each available field with the values identified in this guide, including the single quote ('), double quote ("), and wildcard character (*).

    In future blog posts, we will explore other mobile security issues as they relate to Swift and iOS development, including the OWASP Mobile Top 10. In the meantime, feel free to reach out to me (seth [at] nvisium.com or @sethlaw) with issues or questions related to Swift Security.

    Happy Hacking.

    --Seth

    Seth Law is the Director of Research & Development of nVisium and wrangles the internal and external research efforts to improve understanding of application security. He spends the majority of his time thinking up new ways to secure web and mobile applications, but has been known to code when the need arises.

    For the past 12 years, Seth has worked within multiple disciplines in the security field, from software development to network protection, both as a manager and individual contributor. During the last few years, Seth has honed his application security skills using offensive and defensive techniques, including tool development.

    Seth is currently involved in multiple open source projects and is working with others to advance the state of mobile security testing tools. He has spoken previously at Blackhat, Defcon, and other security conferences.

    Seth has worked across multiple sectors in the last 14 years for companies including Iomega, Early Warning Services, Fishnet Security, and Zions Bancorporation.
  2. Intro to BurpSuite Part IV: Being Intrusive

    Welcome to our 4th installment of Intro to BurpSuite. This time around, we're going to focus on using another tool in the BurpSuite arsenal to send targeted requests to a web server, rapid-fire. Intruder can be used for a variety of fuzzing and brute-force techniques using pre-made lists or automatically generated input. It's also amazingly useful for list-based tasks such as mapping a site or discovering hidden directories and errors.

    Intro

    Before I get started, I should mention that I'm using the same environment settings I created in the first part of the Burp series. I recommend reviewing this post if you're new to Burp and are just getting started.

    http://blog.nvisium.com/2014/01/setting-up-burpsuite-with-firefox-and.html

    Secondly, we're going to be using some pre-made lists and Burp-generated lists with Intruder, so if you want to follow along exactly, please download the wordlists from SVNDigger. They're a great starting point and can really help with that first step.

    https://www.netsparker.com/blog/web-security/svn-digger-better-lists-for-forced-browsing/

    With that all out of the way, we can get into the meat of it with Intruder.

    As I mentioned, we're going to use Intruder to send a massive number of requests, so be aware that unless you lower the request options (which we'll touch on), this can be a noisy attack. Staying under the radar can be a concern for some red team exercises, but for the purposes of this article, we're going to be loud and obnoxious.

    We're going to use a basic exercise on hackthissite.org to demonstrate Intruder's capabilities, but we're only going to cover one of the primary payload types: Simple list, which allows you to attack with a pre-made list (the one from SVNDigger that I mentioned, in this case). Another type, Brute forcer, allows you to specify a character set which Burp will then use to generate a list on the fly; this is useful for random values such as passwords with a known character length.

    In the first example, we're going to look at basic exercise number 3, which is located here:

    https://hackthissite.org/missions/basic/3/


    Keep in mind, there's an easier way to beat this "challenge" but we're using this site for demonstration purposes, and we'll leave it to you to use this technique creatively to perhaps tackle some of the more advanced portions of the site as a learning tool.



    So we can see that we have a password field and not much else. We can also see that the description mentions a password file. Without much knowledge of the site (spoiler alert), we may be inclined to view the source and find some interesting tidbits of information, but in this case, previous experience with hackthissite.org would point us toward sniffing out PHP pages. And that's exactly what we're going to do.

    The Setup

    If we send the request to Intruder...



    we can see hackthissite.org over port 443 on the Target tab. If we take a look at the Positions tab, we see some interesting portions of the request highlighted in orange. These are assumptions BurpSuite makes about possible entry points. They are merely suggestions, and since we just want to do some discovery, let's take a look at manipulating some of the information in the request.

    First, clear the currently marked sections using the "Clear §" button to the right.



    Then put your cursor after "GET /missions/basic/3/" and click "Add §" twice.



    You'll see two section signs (§) highlighted in orange, and you'll want to append ".php" after them. Since we're doing discovery and we have an idea that this is a PHP-based site, we append that file extension because it's the most likely one to occur.

    It should look like this:



    Now, a little explanation of what's going on here. We've cleared all automatically created sections and added our own. This section sign pair (§§) indicates that we're going to insert our payload between these two points. So if our payload was the word "admin" it would send the request with:

    GET /missions/basic/3/admin.php

    Burp will send every word in our payload list through that entry point. This means Burp will send a large number of pre-determined requests to the server without our having to manually enter each one into Repeater or through the proxy. We can then view the results in a consolidated view.

    Next, let's take a look at the Payloads tab.

    Defining our Payload

    First, we want to define our payload set. For this demonstration, we're going to choose Simple list and load the list from SVN Digger.



    Then we're going to load the list under Payload Options. Click the "Load..." button and pick the all-extensionless.txt file. We are choosing this file because we already defined the ".php" extension in the Positions tab.

    If done correctly, you should see a list like this pop up in the Payload Options section.



    There are some other options, but nothing we have to worry about at this point.

    Launching the Attack (or the Discovery in this case)

    Let's go ahead and run the attack. Starting an Intruder attack can be a bit unintuitive at first: select "Intruder" from the menu at the top of the window, and click "Start attack".



    This will begin the attack, and you'll be greeted with a results window. Click the Status column to sort by the response code.



    It shouldn't take too long to see that "password" returns a 200 response. If you take a look at the response in the web browser, you'll see the password contained in the password file. Entering that into the password field will pass the challenge.

    About that Throttle

    I mentioned at the beginning of the post that this was going to be noisy, and I meant it. If you launched this sort of discovery on a pen-test, you would probably raise some alarms. Since we're hitting a site that is meant to be attacked, we don't have to worry about it so much. If you're authorized to go full throttle on a site, this would also be fine, but if you're trying to remain stealthy, it may be a good idea to take a look at the throttling options offered in the Options tab.



    The Request Engine section gives you control over throttling, threads, and retry options, and even allows you to delay the start of the attack. This is useful if you want to send requests with a delay in order to limit the chances of defenders discovering your attack.

    Some Afterthoughts...

    Now, I want to end this post with the idea that this is simply a demonstration of BurpSuite's Intruder to introduce newcomers to the interface. If you ran the page through the proxy, you may have noticed that the password.php file was referenced in the parameters and we could have achieved the same results without Intruder, but the beauty of offensive techniques is that you can arrive at a positive result in a variety of ways, some more complicated than others. 

    Intruder also has many other payload options, including Brute forcer, which allows you to specify a character set and length for your payloads. This is useful when attacking passwords where you know the complexity requirements, and it's especially effective against sites with weak requirements.

    There are a few other, more advanced techniques that allow you to use Intruder with a great deal of imagination and creativity to get some interesting results. The tool is built to be versatile and it certainly succeeds in that respect. I don't want to go down the rabbit hole, but I will be posting more information on some of the more advanced Intruder functions later in the series as we wrap up the modules. For now, we're just getting warmed up, and I encourage you to stay tuned for more.

    Ken is a Senior Security Consultant at nVisium. He works hard to defend our clients' web applications and provide real solutions to their security concerns. Ken loves his technology and can always be found researching new languages, gadgets, applications, and hardware. Ken began his career in software product management, but quickly realized he'd rather be down in the weeds. Armed with the project management mindset, he dove head first into networking and development, and came out with a passion for security.

    Ken is creative at heart and has an innate desire to provide an environment where clients are excited to learn about and implement good, proactive, and efficient security practices that complement an organization rather than hold it back. Ken has worked in the IT industry for 7 years for companies such as HyperOffice, LivingSocial, Citrix, and even the US Army, which has enabled him to gain experience in all walks of business, from a humble startup to a fully fledged enterprise, and he loves every waking second of what he does.
  3. The Role of a Designer in an Application Security Company

    Having recently started at nVisium as a designer, I have a role in an application security oriented company that is clearly unlike most others on the team. I joined the company as part of the effort to expand the new development team, while the majority of my colleagues work in the consulting service that the company has its roots in.

    As for myself, I am a graphic designer by trade, with some side knowledge of front-end web development and user research, yet my job so far consists of 70% front-end development, 20% user research, and 10% graphic design, which means most of the time I am given challenges outside of my realm of familiarity. But that was not my foremost concern.
    ____________

    Here are some of the things I have noticed in the past four weeks:

    Integration

    Whenever you become part of a new environment, there will be the initial learning curve of trying to understand how to work with everyone, as people have different working styles. However, joining the team bearing a new role steepens that curve quite a bit. On top of adjusting to the personalities and work habits, it was quite evident that the question “Can I ask Hong to do this?” was always being asked because the team did not have a dedicated in-house designer prior to my arrival.

    Expectation

    I used to work with designer peers, so it was not until I joined nVisium that I realized the amazement people can have toward my work. For example, I recently produced this logo animation in a two hour time frame:


    ...and it led to our CTO, Ken Johnson, jokingly telling me, “From now on I am just going to shut up and let you do whatever you need to do.” While it is indeed pleasant to receive positive encouragement, this is problematic as well because (1) I am aware that I am not a magician and my knowledge only covers a few fields of design, and (2) my colleagues trust that I always have the ultimate say with anything design-related, leading to hiccups in feedback, which I will explain in the next point.

    Feedback

    The importance of feedback to design professionals cannot be overstated. Unlike code, a design can only be evaluated by observing other people responding to and/or using it. In an environment where there is more than one designer, members within the team can critically assess and critique each other’s work as it develops over time, thus producing products with finer quality before they go out to the customers. It is trickier in my current situation as I am the sole person in such a role. Once in a while, I will find myself stuck with nuisances that would otherwise be easy to resolve if there were another pair of trained eyes. However, there are times that I do receive suggestions that are valid and yet not applicable because of the constraints of time, resources, or experience. In this case, offering an appropriate rejection is a delicate matter: You certainly do not want to give the false impression that only you are entitled to say “yay” or “nay” in a design process because that will stall the feedback loop and sour your relationship with your colleagues.

    Prioritization of Tasks

    I thought I had a lot of tasks while I was in college, but working for a startup has taken the challenge of managing my tasks to another level. At any given moment, there are at least 10 things that need to be done, and most of them can be broken down into smaller parts, doubling or tripling the actual number. It can be as small as increasing the leading of a text paragraph on the home site, or as time-consuming as putting together a graphic style guide (which can be broken down into at least 6-8 parts) to be used until a rebrand is introduced to the company. The battle between time and ROI is constant as I move from one thing to another. For example, I use a mind map to manage my to-do list, and my tasks translate into something that looks like this:


    Importance of Security

    This is the most fascinating part of all because it defies all of my previous knowledge: we are a security company, so oftentimes the importance of security is emphasized more than other factors, even if that means sacrificing UX slightly. To give an example: just last week, I was discussing with Ken the inclusion of reCAPTCHA on our home site contact form and whether or not it turns away users who find it challenging to use. This is not to say we do not value UX in our development; we are simply well aware of how insecure features can cause harm to both ourselves and our users in the long run. In a non-security-based team, security is usually one of the most overlooked parts of the operation, if not the most overlooked. In fact, I have been in situations where million-dollar proposal documents were transferred many times over insecure networks where they could easily be intercepted, but I would not have recognized the risk until I started learning secure practices from my current colleagues.
    ____________

    My observations thus far have led to a single question: What are the most crucial responsibilities I have as the first designer of the team? Considering that I just joined not too long ago, I do not have a comprehensive answer yet. It is safe to say that there will be a follow-up post on this topic a few weeks down the road. Until then, let’s see if I can trim down my mind map faster than it further branches out.

    Hong is nVisium’s lead designer. He is responsible for everything from producing infographics to accompany the team’s research to designing the nVisium.com landing page—We keep him very busy. Coming from a graphic design background, Hong is fascinated by the complexity of human behavior and he is particularly interested in UX design. He is also a semi-polyglot (Klingon not included) whose brain occasionally fails to function properly because it is processing five different languages.

    When he is not occupied, he enjoys reading Japanese comics, playing turn-based strategy games... and reading more psychology related books.
  4. Getting Started with Android Wear Security I: Introduction

    The first Android Wear devices shipped this week and we were excited for our new toys to arrive.



    For Wear to be useful, you need to pair it with a phone or tablet via Bluetooth, and your device needs to be running Android 4.3 or above. On your handheld, you have to install the Android Wear app. Neither the Gear nor the G has Wi-Fi or NFC, making Bluetooth your primary way of accessing data.

    Wear applications have a companion app on the paired device. To install an app on Wear, you install it through Google Play on your mobile device. An APK is installed on both your handheld and Wear. When you create an application with Android Studio, you can now select the form factors you want your application built for, including phone/tablet, TV, Wear, or Glass.



    When you install an app with a Wear component, the Wear APK is automatically pushed to your watch.



    The Wear app and companion app can be signed separately but should use the same package name. They are essentially independent applications running in isolation and connected via Wear’s APIs for exchanging data. These APIs include the Data API, the Message API, and the Node API.

    Setting Up a Test Environment

    If you are using a physical watch for testing instead of an emulator, the first thing you’ll want to do is root your device. If you have an LG G, here is a tutorial on getting your bootloader unlocked and loading the boot image, giving you access to superuser privileges.

    One thing to note is that you can set up debugging for your watch either via Bluetooth or via USB. If you want to walk around with your watch or move around freely, then Bluetooth is your best bet. To set up debugging via Bluetooth, follow the instructions found here. You'll need to configure both your watch and handheld to enable debugging.

    What's Different?

    Compared to traditional mobile apps, Wear apps are pretty lightweight. Most of the heavy lifting is supposed to be performed on your handheld, with notifications and messages sent between the devices. A Wear app communicates with its counterpart on the handheld, which makes all calls to remote services on the Wear app's behalf.

    There are no WebViews on Wear currently. This means that issues like Cross Site Scripting (XSS) or Cross Site Request Forgery (CSRF) are less of a factor within the Wear app itself, but still have to be considered if your handheld app implements WebView functionality.

    Adding the Wear component extends the trust boundaries for existing applications. This includes handling untrusted data received from Wear as well as securely storing data that's replicated to the watch. Encryption schemes may need to be extended to account for distributed storage and the need for real-time replication.

    Intents and IPC are still utilized by Wear, allowing other applications to inject malicious data or to gain privileged access to an exposed component. If you receive data from the Wear app and use it to issue authenticated requests to a web service, you should ensure that these workflows are sufficiently protected from unauthorized apps. 

    What if I lose it?

    Good question. Out of the box, neither the LG G nor the Samsung Gear Live exposes the ability to lock your watch or protect it with a passcode, swipe sequence, or biometric data. As a result, it's pretty trivial to compromise a lost or stolen device using the rooting instructions given above.

    Up Next

    In the next post on Wear, we will dive into some code examples illustrating ways that we foresee developers introducing security issues into their wearable apps. So, come back soon!

    Jack is the CEO at nVisium and loves solving problems in the field of application security. With experience building, breaking, and securing software, he founded nVisium in 2009 to invent new and more efficient ways of protecting software. Jack is a leader of the OWASP Mobile Security project and contributed to the development of the OWASP Mobile Top 10 Risks. He is an active mobile application security researcher and focuses on creating techniques for making application security scale effectively.
  5. Protecting Third-Party Services I: SMS Gateways

    Recently, during an assessment, I discovered that a client had several functions within their web application that leveraged third-party services for sending SMS and email messages. These particular functions were publicly available to unauthenticated users.

    Upon testing these services, I discovered that they weren’t protected by any anti-automation or throttling mechanisms. This means an attacker can repeatedly send requests to the application, leveraging a tool such as Burp Suite’s Intruder.

    In part one of this two-part post, we’re going to be talking about attacking and protecting SMS sending functions within applications.

    So let’s talk about the SMS sending mechanism

    This function was designed to provide users with an easy method for downloading the mobile app. It worked by asking users to enter their cell-phone number; the system would then send an SMS containing a link to the mobile app download.

    Unfortunately, this application didn’t implement any throttling. As such, a single user could send a large number of SMS messages to a desired phone number. This is bad for a couple of reasons.

    Reason One: The Company Might Have to Pay

    Let us pretend we are a malicious user who decided to target this particular service with the intention of causing the company to incur costs associated with the service.

    Most often, companies, especially startups, leverage services such as Twilio for sending SMS messages. Using such a service is typically cheaper than rolling out an entire SMS infrastructure. Each of these services has a different pricing model, but most involve some cost per message sent.

    As seen below, in the case of Twilio, it costs $0.0075 per message sent.


    As a malicious user with the goal of costing the target organization money, we want to find some method of sending large quantities of SMS messages automatically. Because we have identified that this specific mechanism doesn’t implement throttling, we use Burp’s Intruder to repeatedly send HTTP requests to the SMS sending endpoint. In this instance, we don’t necessarily care who receives the messages as long as we’re able to send lots of them, so we set the phone-number parameter to some arbitrary phone number and configure Intruder to use null payloads. With this payload type, we can set the number of payloads to generate, which we have configured to 100.



    So with the Intruder attack, we send 100 HTTP requests to the /sms/send method, which in turn requests the SMS gateway to send 100 pre-defined SMS messages to the number of our choice. As we can see, I now have 100 new messages in my inbox.


    With this particular client, we were able to send an average of 10 requests per second, which translates into 10 SMS messages per second. If they were using the Twilio plan that charges $0.0075 per text message, we would have caused the client to incur a cost of approximately $0.075 (7.5 cents) per second. This translates to approximately $6,500/day!

    Reason Two: The User Might Have to Pay

    If an attacker is more interested in targeting a specific user rather than the organization, they can leverage the same attack outlined in reason one, but direct it at a victim's phone number.

    Some SMS service providers still charge users to receive text messages. For example, I was recently traveling internationally, and every time I received a text message, I was charged $0.05. In a typical situation, that’s not bad because I’m not likely to receive more than 20 messages a day while on vacation.

    Unfortunately, if someone had a grudge against me, they could really cost me some money by automatically sending me SMS messages.

    Given the same attack outlined above, an attacker can send approximately 10 SMS messages per second. At a rate of $0.05/SMS, that costs me $0.50/second. That breaks down to a whopping $43,200/day!

    There’s no excuse for this

    Now admittedly, it’s not likely that a cell-phone provider would actually stick you with a $43,000 bill, but I wouldn’t want to be the organization responsible for it! The best approach is to ensure that, as an organization, you are not responsible for incurring such costs. This can be accomplished by adequately protecting this and all other third-party integrated mechanisms.

    In this particular situation, it is unlikely that a user would be leveraging this function very frequently, so we can consider implementing a throttling window of one minute per request. If a user attempts to use this function more than once per minute, they will receive an error message indicating that they must wait.
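
    As a rough illustration (a sketch with hypothetical names, not the client's actual implementation), such a per-number window amounts to a simple timestamp check:

        import Foundation

        // Minimal sketch of a one-request-per-minute throttle per phone number.
        // A real deployment would keep the timestamps in shared storage
        // (e.g. a database or cache) rather than in process memory.
        final class SMSThrottle {
            private var lastRequest: [String: Date] = [:]
            private let window: TimeInterval = 60
            private let queue = DispatchQueue(label: "sms.throttle")

            // Returns true if the request may proceed, or false if the caller
            // should respond with a "please wait" error instead of sending an SMS.
            func allow(_ phoneNumber: String, now: Date = Date()) -> Bool {
                return queue.sync { () -> Bool in
                    if let last = lastRequest[phoneNumber],
                       now.timeIntervalSince(last) < window {
                        return false
                    }
                    lastRequest[phoneNumber] = now
                    return true
                }
            }
        }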

    With such a window, we can recalculate the cost to the company and the cost to the user. There are 1,440 minutes in a day, so a malicious user will be able to send at most 1,440 requests per day. This would cost the organization up to $10.80 per day and a victim up to $72 per day, based on the cost models identified above.

    Conclusion

    Even though you may not consider a function sensitive, an attacker may still attempt to take advantage of it. In this real-world scenario, an attacker could leverage a simple function, designed to provide a better user experience, to cost the organization and its users an obscene amount of money.

    If the organization had implemented a throttling mechanism, they could have drastically reduced the cost of the worst-case scenario.

    If you’re concerned that your organization may be vulnerable, feel free to contact us. We’d be happy to talk to you.

    John Poulin is an application security consultant for nVisium who specializes in web application security. He previously worked as a web developer and software engineer focused on building multi-tier web applications. When he's not hacking on web apps, John spends his time building tools to help him hack on web apps! You can find him on twitter: @forced_request and on myspace: REDACTED.
  6. nVisium Welcomes Seth Law as the Director of Research and Development!

    nVisium is proud to announce that Seth Law has joined our team as Director of Research and Development. Seth brings years of experience in the consulting, development and research world to nVisium. Seth will be helping shape nVisium's consulting and products through his work with new technologies and ideas. nVisium prides itself on providing security consulting services and products for development teams. Seth's work will ensure nVisium stays ahead of the curve.

    Research and Development (R&D) at nVisium is focused on application security, from mobile to web applications, encompassing any aspect of security that developers are involved with. The group includes many nVisium employees, from consultants to executives, who are interested in exploring application security technologies and improving nVisium and the industry as a whole. nVisium's internal R&D activities will be the bridge between nVisium and the security and development communities at large.

    It is an exciting time to be involved in the technology industry. New and existing technologies are increasing exponentially and improving life in the process. nVisium R&D efforts will focus on these new technologies and provide tips and tools to create secure applications. Already, nVisium R&D contributors have been primary sources for multiple open source efforts, including RailsGoat, Grails.nV, and the OWASP Mobile Security Project.

    In addition to continuing efforts with the projects listed above, nVisium has internal R&D projects on the roadmap with key technologies. At the top of the current stack is Swift, Apple’s new programming language for iOS and OS X development that significantly reduces the pain of learning to produce applications. Our team has been diligently learning the ropes of Swift and will be releasing videos, blog posts, and applications that target Swift security issues.

    Many members of the R&D team have past experience with "enterprise" languages, such as .NET and Java. This expertise will be on display when using pseudo-scripting languages like Scala and Groovy. While Grails.nV is a good start, additional projects have been planned to bring these newer technologies into the fold and help developers get a good grasp of the risks and gotchas associated with them.

    In summary, nVisium R&D is our way of staying relevant in an ever changing technology landscape, while helping the community do the same. We know that not all of the content produced may be useful to you, but would like to start a dialog with interested parties and projects. Feel free to reach out to Seth and the nVisium R&D team at research [at] nvisium.com and tell us what you think. 

    nVisium Tools: http://www.nvisium.com/resources/tools
    OWASP Mobile Security Project: https://www.owasp.org/index.php/OWASP_Mobile_Security_Project

  7. Javascript Security Tools

    The world of JavaScript is exploding these days with new frameworks, libraries, and tools. Everything from Node.js to Backbone.js is becoming more popular for new development and integration with old projects. This presents new issues for security teams and consultants trying to test and protect those applications. Luckily, there are quite a few options for automated and manual JavaScript security testing. In this post we'll go over a few of them and how they can be used to increase the security of your JavaScript code.

    Retire.js

    Similar to Dependency Check or Bundler-Audit, Retire.js looks at your third-party libraries and finds any publicly disclosed vulnerabilities that apply. The tool is especially useful when used in conjunction with a CI server to automatically monitor for new vulnerabilities in your third-party libraries.

    Retire.js run against Railsgoat, the vulnerable Rails application.

    Retire.js can also be used as a Chrome or Firefox extension to notify you of out-of-date libraries in use on a site. This can be useful during application assessments.


    ScanJS

    ScanJS is a static analysis tool for JavaScript. ScanJS creates an AST of your JavaScript, parses it for common sources and sinks, and reports security issues. It includes 107 rules ranging from DOM XSS to usage of sensitive APIs. ScanJS can be run as a local server or from the command line. The web UI allows you to upload files to be analyzed.

    The ScanJS ruleset. 

    ScanJS results include the rule and line number.

    JSPrime

    JSPrime is another static analysis tool built for JavaScript security testing. JSPrime is similar to ScanJS, but it's built on top of Esprima, the ECMAScript parser by Ariya Hidayat. It also parses sources and sinks to detect common DOM XSS vulnerabilities.

    JSPrime output indicating DOM XSS

    JSPrime can be run as a server locally, where JavaScript code is analyzed. The results are displayed in the web UI and include the sources and sinks for each result.

    None of these tools is as simple as clicking 'go' and getting a report. The issues that are reported require further research and validation before they can be considered confirmed vulnerabilities. Nonetheless, these tools offer great insight and a starting place for securing your JavaScript projects.

    This post only touched on a few of the many tools available to help secure your JavaScript code. We also recommend looking at Dominator Pro, the DOM XSS scanner; Helmet, the security middleware for Node.js applications; and the DOM-XSS Scanner Checks for Burp. It's good to know that as the world of JavaScript development expands with new frameworks and libraries, the tools and techniques to secure them are evolving as well. We'll follow up in future posts with more information on how to secure your JavaScript applications.

    Mike McCabe is the Director of Professional Services at nVisium Security. In his free time he likes to build and hack on open source projects. He's a big fan of Burp and set -o vi in his bash profile. Mike also serves as a board member for the OWASP NoVa chapter.