Saturday, October 29, 2011

Hoppity Solution

I recently stumbled upon the Facebook engineering puzzles located here and decided to try a few myself.  Hoppity is the first and easiest one, but I thought I'd take a stab at it here.  I'm not really sure what Facebook would be looking for in order to grant a job interview, but it would have been cool to be able to view the submissions that got jobs. Please note that I didn't bother with the file reading code, but you can easily add it yourself.


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace HoppitySolution
{
    class Program
    {
        private const string DIV_3 = "Hoppity";
        private const string DIV_5 = "HopHop";
        private const string DIV_3_AND_5 = "Hop";

        static void Main(string[] args)
        {
            var maximumInt = GetMaximumIntFrom(args);
            Enumerable.Range(1, maximumInt).ToList().ForEach(x => ProcessHop(x));
        }

        private static void ProcessHop(int i)
        {
            if(DivisibleBy3And5(i))
            {
                Console.WriteLine(DIV_3_AND_5);
                return;
            }

            if (DivisibleBy5(i))
            {
                Console.WriteLine(DIV_5);
                return;
            }

            if (DivisibleBy3(i))
            {
                Console.WriteLine(DIV_3);
                return;
            }
        }

        private static bool DivisibleBy5(int i)
        {
            return i % 5 == 0;
        }

        private static bool DivisibleBy3(int i)
        {
            return i % 3 == 0;
        }

        private static bool DivisibleBy3And5(int i)
        {
            return DivisibleBy5(i) && DivisibleBy3(i);
        }

        private static int GetMaximumIntFrom(string[] args)
        {
            // Add File Read Code here.....
            return 15;
        }
    }
}
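For the hard-coded maximum of 15, the program prints one line for each number from 1 to 15 that is divisible by 3 or 5 (and nothing for the rest):

```
Hoppity
HopHop
Hoppity
Hoppity
HopHop
Hoppity
Hop
```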

Thursday, August 11, 2011

IIS 7.5, ASP.NET and log4net FileAppender lock error

Logging is a big part of any application.  In a recent MVC3 web application, I was having trouble getting my log4net configuration to log successfully to a file.  At first I had no idea why nothing was logging.  The symptoms I saw were that the destination file was being created, but nothing was getting logged to it.

Step 1:  Turn on log4net internal debugging.
Visit the log4net FAQ and look under the troubleshooting section.  There you will find an entry on turning on the log4net internal debugging feature.  Basically, it uses the Windows trace system and then logs those trace alerts to a file.

Step 2: Inspect the debugging output
After taking a look at the debugging output, I noticed the following error.

log4net:ERROR [RollingFileAppender] Unable to acquire lock on file xxxxx. Access to the path 'xxxxx' is denied.

At least now I had a place to look.

Solution
There are two things that I had to do to make this solution work.
1)  In a web environment, there could be multiple threads trying to write to the logging file at once (depending on how you have things set up).  By default, log4net tries to acquire an exclusive lock on the files it is writing to.  You can override this default behaviour by telling log4net to use minimal locking.  You can find more information in the FAQ by looking for "How do I get multiple processes to log to the same file".

In any event, the configuration you want to add to your fileappender is

<lockingModel type="log4net.Appender.FileAppender+MinimalLock" />
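Putting it together, a RollingFileAppender with minimal locking might look something like this (the file path and layout pattern here are just placeholders):

```xml
<appender name="RollingFile" type="log4net.Appender.RollingFileAppender">
  <!-- placeholder path; point this wherever your app pool can write -->
  <file value="App_Data\site.log" />
  <appendToFile value="true" />
  <!-- release the file lock between writes so multiple writers can share it -->
  <lockingModel type="log4net.Appender.FileAppender+MinimalLock" />
  <layout type="log4net.Layout.PatternLayout">
    <conversionPattern value="%date [%thread] %-5level %logger - %message%newline" />
  </layout>
</appender>
```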

2) Make sure the application pool you are using has permission to write to that file.  In IIS 7.5, you can set up your application pools to use application pool identities instead of NetworkService or another named account.  You can find more information about this here.  Basically, an application pool identity is kind of a "virtual account".  In order to grant access to the file, you have to give the user "IIS AppPool\<AppPoolName>" permission to write to it.

Happy logging!

Thursday, August 4, 2011

IIS 7.5 and .NET Security: Part 1 - Security through Obscurity

There are several different security features that you are going to want to use to protect your web server. One of the techniques you should use is security through obscurity. I want to be very clear: this is only ONE aspect of security.  It should definitely not be the only thing you do.  I further want to stress that this won't be a very good defense against a targeted attack.  At best, you will be able to fool a large percentage of the script kiddies who are only looking for easy scores.


The first step in any attack is recon. Anything we can do to misguide or delay this step pays dividends later on.  If you can make a tool think that your IIS server is an Apache server, great.  It just means the results the attacker gets will be bogus.

Most recon involves both OS and web server fingerprinting.  I will only focus on web server fingerprinting. Most techniques focus on a few basic things.

1)  Extension of the page being served.  Example: .aspx
2)  Server headers.  By default, IIS will claim that it is IIS.
3)  Session tokens.  For example, jsessionid almost always means a Java application, which helps narrow down the field of web servers.

There are lots of articles on the interwebs about (1), so I will skip it here.

2)  One of the easiest ways to change the server headers is to install WebKnight.  Changing the server headers is one of the many features WebKnight has to offer.

Doing this is quite simple.  In the WebKnight configuration file there is a "Headers" section.  You can set the server header value to anything you want it to be.  A good one is something like "Apache/2.0.64".

3)  You need to change the cookie name used to store the session id.  By default it is something like ASP.NET_SessionId.  If that doesn't say "hack me", I'm not sure what does.  The sessionState element inside the web.config allows for an optional attribute called cookieName that lets you change the name used.  I suggest you change it.  You can use something like Id, ApplicationId, WebsiteId, or something else really generic.  If you want to continue trying to mimic an application running on Apache, you can make your attackers salivate and change the name to PHPSESSID.
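For example, here is a sketch of the web.config change (the cookie name "Id" is just one of the generic suggestions above):

```xml
<system.web>
  <!-- "Id" is a deliberately generic name that reveals nothing about the stack -->
  <sessionState cookieName="Id" />
</system.web>
```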

Once again, these are just a couple of tricks you can use to try and fool some script kiddies.  Most of this stuff will, at best, delay a targeted attack.

Saturday, July 30, 2011

Course Review: ITIL v3 Foundations

The company that I work for is in a transition phase.  One of the big things they are trying to do is implement more ITIL processes inside of IT.  With this, they have sent the permanent IT staff off to ITIL v3 foundations training.

ITIL has been a buzzword in the industry since v3 was released in 2007.  Those companies that practice the art form known as "certification shopping" have long weeded out any resumes that do not contain the precious ITIL buzzword.

I quite enjoyed the course, although it was very high level and dry.  Our instructor was exceptional, however, which made for some interesting discussions in this 3-day course.  I view ITIL training (at least at the foundations level) more as training in a language than in any specifics of the IT industry.  The foundations course's main focus is to establish a baseline of knowledge and terms that people can then use to communicate effectively.  If you don't know the ITIL definition of a function, for example, you would be pretty lost in a conversation about it.  Furthermore, you'd probably find it quite hard to get a good idea of your job role if it was described solely in ITIL terms.

In any event, it was a course worth taking.  I definitely learned something out of it, even if it was just the terms ITIL uses to describe the best practices in use today.

Sunday, July 10, 2011

HackThisSite - Realistic 6 Solution

It has been a while since I last posted. The only excuse I have is that it is "summer".

In any event, I decided to do a nice simple one to get back into things.

The "goal" behind HTS Realistic 6 is to actually code the solution. As a penetration tester, you need to be able to write your own scripts to accomplish very specific tasks. You can find a lot of info on XECryption here or here, so I'm not going to go through the gory details of how to "crack" XECryption.

This is a good opportunity to pick a language that you do not know well in order to practice it.  I picked C# because I wanted to do more with LINQ.

In any event, here is my solution.  Pass in the full path to the file you wish to decrypt.  Pipe the output to a file.  Send the output (minus the first line saying it found the key) to ToxiCo_Watch.

using System;
using System.Collections.Generic;
using System.Linq;
using System.IO;
using System.Text.RegularExpressions;

namespace XECryptionDecrypter
{
    class Program
    {
        static void Main(string[] args)
        {
            var encryptedFile = ParseArguments(args);
            ProcessFile(encryptedFile);
        }

        private static void ProcessFile(FileInfo encryptedFile)
        {
            var encryptedContents = File.ReadAllText(encryptedFile.FullName);
            var chunks = GetChunks(encryptedContents);
            var key = DeterminePasswordKey(chunks);
            Console.WriteLine("Found key: " + key);

            foreach (var chunk in chunks)
            {
                PrintDecryptedChunk(chunk,key);
            }

        }

        private static int DeterminePasswordKey(List<ChunkedNumber> chunks)
        {
            // In English text the most frequent character is the space (ASCII 32),
            // so the most common chunk total encrypts a space and the key is that
            // total minus 32.
            var mostCommonChunkedValue = chunks.GroupBy(x => x.Total).OrderByDescending(x => x.Count()).First();
            var key = mostCommonChunkedValue.First().Total - 32;
            return key;
        }

        private static void PrintDecryptedChunk(ChunkedNumber number, int key)
        {
            Console.Write(char.ConvertFromUtf32(number.Total - key));
        }

        private static List<ChunkedNumber> GetChunks(string encryptedContents)
        {
            var encryptedContentsWithoutReturns = encryptedContents.Replace(Environment.NewLine, "");
            var chunkPattern = @"(\.\d+\.\d+\.\d+)";
            var chunkRegex = new Regex(chunkPattern);
            return (from match in chunkRegex.Split(encryptedContentsWithoutReturns) where !String.IsNullOrWhiteSpace(match) select new ChunkedNumber(match)).ToList();
        }


        private static FileInfo ParseArguments(string[] args)
        {
            if (args.Length < 1)
            {
                throw new ArgumentException("Please provide the encrypted file path as argument 1");
            }

            var filePath = args[0];

            if (!String.IsNullOrWhiteSpace(filePath) && File.Exists(filePath))
            {
                return new FileInfo(filePath);
            }

            throw new ArgumentException(string.Format("File with path {0} not found",filePath));
        }
    }

    class ChunkedNumber
    {
        public int Number1 { get; private set; }
        public int Number2 { get; private set; }
        public int Number3 { get; private set; }
        public int Total { get; private set; }
        public string Raw { get; private set; }

        public ChunkedNumber(string raw)
        {
            var formattedString = raw.Replace(".", ",").Substring(1);
            var rawNumbers = formattedString.Split(',');
            Raw = raw;
            Number1 = int.Parse(rawNumbers[0]);
            Number2 = int.Parse(rawNumbers[1]);
            Number3 = int.Parse(rawNumbers[2]);
            Total = Number1 + Number2 + Number3;
        }
    }
}

Saturday, June 11, 2011

MVC3: Basic JQGrid Example

jqGrid seems to be one of the most fully featured grids in jQuery land.  Further to this, it seems like MS is putting some money into the development of features for this particular grid.  The following is a basic example of how to get jqGrid running with MVC3.

The scenario is as follows.  Let's say you have a small amount of data (say, less than 500 rows) that you would like to display to your user.  The easiest way to do this is to load all the data client side and then let jqGrid take care of the rest.  The client-side sorting and filtering will be really quick as it will not require another server call.  This will also reduce load on your server.

jqGrid requires that the response format be very specific.  From the examples, it looks something like this.

{
    total: 1,
    page: 1,
    records: 20,
    rows: [ { id: 1, cell: ["data1", "data2"] }, ... ]
}

You can customize the names of the fields with the jsonReader property, but the concept is still the same.
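For reference, jqGrid's default jsonReader (which the response format above already matches) looks roughly like this; override any of these names if your JSON uses different fields:

```js
jsonReader: {
    root: "rows",       // property holding the array of row objects
    page: "page",       // current page number
    total: "total",     // total number of pages
    records: "records", // total number of records
    repeatitems: true,  // rows use the compact { id, cell: [...] } form
    cell: "cell",
    id: "id"
}
```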

With the row count being pretty low and no need for server-side interaction (read: save/update), we can load all the data directly into the grid via the model.  Here is some sample code to do that.

public class HomeController : Controller
    {
        public ActionResult Index()
        {
            var model = new IndexModel();
            model.Data = new JavaScriptSerializer().Serialize(CreateGridData(100,100));
            return View(model);
        }

        private dynamic CreateGridData(int count, int rows)
        {
            var totalPages = Math.Ceiling((double)count / rows);
            return new
            {
                total = Convert.ToInt32(totalPages),
                page = 1,
                records = count,
                rows = CreateGridItems(count)
            };
        }


        private dynamic CreateGridItems(int count)
        {
            var gridItems = Enumerable.Range(0, count).Select(x => new GridItem() { Name = "Name" + x, Number = "Number" + x }).ToArray();
            var results = new List<object>();
            foreach (var gridItem in gridItems)
            {
                results.Add(new
                {
                    id = gridItem.Name,
                    cell = new []{gridItem.Name,gridItem.Number}
                });
            }
            return results;
        }

    }


The model is quite simple:

public class IndexModel
    {
        public dynamic Data { get; set; }
    }

And the JS code (just a very basic grid, you will probably want more options than this)

$(document).ready(function () {
        $("#simpleGrid").jqGrid({
            datastr: '@Html.Raw(@Model.Data)',
            datatype: 'jsonstring',
            colNames: ['Name', 'Number'],
            colModel: [
                { name: 'id' },
                { name: 'Number' }
            ],
            rowNum:100,
            height: '100%'
        });

    });

The JavaScript serializer converts our object into proper JSON.  MVC3 has some built-in XSS protection and, by default, will escape the data.  You can get around this by using the @Html.Raw feature.  See MVC3 XSS. You now have a jqGrid with all the data loaded.  You can add paging/sorting/filtering that will all work client-side.

Course Review: Developing Applications for the Java EE Platform

Last year, I was still doing a lot of Java work at my current position.  Most of my Java knowledge (as I'm sure is true of most Java developers) had been grown organically via many online tutorials, blog posts, tool documentation, and looking at old code.  The latter is probably one of the best and worst ways to learn a language.

I really struggled to understand how all the different Java components fit together.  Sure, I could create a web.xml, but how did it all really work?  I wanted to learn more about the specifications that drive the Java platform.  With this goal in mind, I took the course Developing Applications for the Java EE Platform from Oracle University.

The course had many cons, but I did learn a thing or two.

Cons:
1)  The electronic voice that you hear while navigating through this course is truly brutal. You learn very quickly that there is a transcript button, where you can read exactly what the "voice" is saying.  There is also a mute button, a must-find in the first couple of minutes.  I swear Oracle should pay you a dollar for every time you have to hear "or navigate by using the tab and spacebar keys".

2) No course notes.  I really like going back later and reviewing course notes, especially for reference.  With the SANS courses, you get course notes with all the slides and some of the "transcript".  Really helpful when you are trying to master skills later on.

3)  Very high level.  The course was designed as more of an intro course, so I knew what I was getting into.  Still, I found the course didn't really focus on how to do things, or on the common ways a developer would take advantage of the various (and I mean various) Java specifications.  For example, it is great to know that JAXB is an architecture for XML binding, but what do you do with it? What are some common implementations?

4)  Session timeout.  The player would still function (if, say, you were within a module and just clicking the next button) even after your session had timed out.  This was a trap: I could start a section, leave to go do something, come back to find the player still functional, and assume my session was still valid.  I would click through the rest of a module only to find that my session had timed out and that I would have to redo everything I just did.  It was pretty frustrating because of the next point.

5)  Complete means clicking through every page.  Yawn.

Pros:
1)  Lots of good information from the "source".  It is one thing to read a tutorial on how to do things, and another to actually read what the specification is supposed to do.

Overall, I learned a fair bit from this course, even if it was just solidifying what I already knew.  I'm not sure I would take another Oracle course, as the delivery format was really hard for me to deal with.

Monday, May 30, 2011

MVC3, Client Side Validation, and dynamically loaded forms

Recently, I was charged with creating a user edit screen.  The user edit screen was to have 4 different "areas".

1) User Details
2) Actions (disable/enable)
3) User Roles
4) Audit

We decided to AJAX out all the different sections so as to make the website more responsive.  If someone disabled an account, all we would have to do is update sections 1 and 2.  No need to make an expensive call to re-establish all the roles the user has in the system.

The problem I had was that when I loaded the form using the jQuery get method, the client side validation would not work.  This made it a pain to get proper error messages back to the client.  I didn't want to have to parse a result and start using my javascript skillz to display messages all over the place.  In fact, the built-in client side validation "features" that MVC3 provides are quite nice.  You define validator logic in one place and go from there....

In any event, the point is, when you dynamically load a form, the client side validation does not work out of the box. To answer why, we have to look at the jquery.validate.unobtrusive.js library that MS provided for us.


$(function () {
        $jQval.unobtrusive.parse(document);
    });

Great piece of code.  The problem is, this only runs once: when the original document is loaded (unless you include all of your JS again in the AJAX layout you are using, but that would be a lot of duplication).

The solution is quite simple.  All you have to do is add a call to the parsing mechanism with the jquery selector for the form you wish to validate.

$.get(
    '@Url.Action("GetPersonalData")',
    {
        Id: currentUserId
    },
    function (data) {
        $("#personal").html(data);
        $.validator.unobtrusive.parse("#personal form:first");
        ... continue ...

Voila, you will now have client side validation on your dynamically loaded form.

Update:
If you want this form to submit via an ajax post (using jquery natively) you have to determine if the form is valid first.  You can find out more here.


References:
http://stackoverflow.com/questions/4406291/jquery-validate-unobtrusive-not-working-with-dynamic-injected-elements
http://bradwilson.typepad.com/blog/2010/10/mvc3-unobtrusive-validation.html

Saturday, April 30, 2011

Book Review: The Agile Samurai

I really believe that being well-rounded is the key to success in the development world (or in any world, for that matter).  I have been in and around Agile methodologies for a while now, but had never really spent the time to read a book about how the pros do it.  This book, although not really an introduction, covers a lot of the basics.  In addition, it gives some useful tips on how Agile is really done in the field.  It covers some concepts to help deal with unreasonable (read: stupid) product owners.  Although probably meant more for project managers, all the information is good to know for any project member.

The pro of this book is the discussion of the inception deck.  I think the one thing projects in general (software or not) miss is momentum.  Watch any sports game and you will notice that momentum is everything.  You can either start your project off by setting it up to succeed, or start it off setting it up to fail.  Does it cost money? Of course.  Does it cost time? For sure.  Do you accomplish great things with momentum on your side? You bet.

The book talks about the concept of an inception deck.  Awesome idea for any (read: any) project.  Here is a summary:

1) Ask why we are here
2) Create an elevator pitch
3) Design a product box
4) Create a NOT list
5) Meet your neighbors
6) Show the solution
7) Ask what keeps us up at night
8) Size it up
9) Be clear on what's going to give
10) Show what it's going to take

This list is a great way to start the project, make sure everyone knows what is going on, and hit the ground running.  In my current project, I was the resident "expert" on the existing system.  I know that had we started off by creating this inception deck, more people would have been on board sooner, and the project team would have had a better idea of what they were trying to deliver.

The con of this book was its focus on the people aspect of projects, specifically Agile projects.  I have another post in the works that will better describe what I am talking about, but in Agile projects you really need good, well-rounded people on your team.  Go-getters.  People who are willing to step out of their comfort zone to get the job done.  Another thing is how to deal with the client.  At the end of the day, presentation and ethos are the main factors in how a project is received by the product owner.  In my most recent project, we had an incident in our first demo.  The BA who was showing off our work was quick to point out that the website's "looks" were not up to par.  It turned out, near the end of the demo, that the customer actually "liked" what we had done so far.  Of course there was room for improvement, but it wasn't all bad.  However, after that meeting, the project team was on the lookout for a design company to work on the UI for the project.  Not the wrong decision by any means, but the presentation could have been done differently.  We could have asked for opinions rather than forcing ours onto our client.

In any event, this was a great read to get some basic + advanced knowledge of the Agile process.  I'd recommend it for anyone who wants to get a better understanding of how things should work in Agile projects.  I'd also recommend it for any project managers out there running Agile projects.

MVC3 Validation Oddities

I had created a post demonstrating an example of how to do validation in MVC 3. I worked off an assumption that is now proving to be false.

1) All validation attributes are run, even if one of them fails.

My initial assumption was that if one of the validation attributes were to fail (say, Required), no other attributes would run. Based on this assumption, I didn't add a null check to my custom validator in that post. In fact, according to the debugger, all validation attributes are run.

I decided to play around with this a bit to see what the logic was behind it. My colleague mentioned that when you validate, you would probably want to get all the errors back at once, as opposed to only one. This made sense at the time, but was still a little bit confusing. Obviously, if you enter a null value for something like username, you probably only want the one error message ("User name is required"). I can understand where my colleague is coming from, however. Let's say the username is not null. You would probably want an error message if the length is too small or it doesn't contain the correct characters.

The first thing I tried was to add two custom validators, and see what would happen. It turns out that although both attributes are run, only the error message from one is actually displayed.

[Address]
[Address2]
public string Address { get; set; }

It seems (at least for me) that Address2 was the error message that always came up, regardless of ordering. I found that one of the methods you can override is IsDefaultAttribute, but even setting that didn't change which message gets sent back to the client.


public class AddressAttribute : ValidationAttribute
    {
        public AddressAttribute()
        {
            ErrorMessage = "Address attribute failed";
        }

        public override bool IsDefaultAttribute()
        {
            return true;
        }

        public override bool IsValid(object value)
        {
            // Always fail so we can see which attribute's message wins.
            return false;
        }
    }

I even tried this using the built-in validators that come with MVC2. Same result: only one error message made it back to the form.

I know for a fact that when client side validation is turned on, you seem to get all the error messages displayed. With that being said, it is interesting that you don't get all the error messages if you are just doing simple, non-JS-dependent error checking. What is also interesting is that you have to do things like a null check in all of your custom validators, otherwise they will blow up.
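As a concrete example of that null check, a custom validator can guard like this (MinLength5Attribute and its length rule are made up for illustration):

```csharp
using System.ComponentModel.DataAnnotations;

public class MinLength5Attribute : ValidationAttribute
{
    public MinLength5Attribute()
    {
        ErrorMessage = "Value must be at least 5 characters";
    }

    public override bool IsValid(object value)
    {
        // Every attribute runs even when the value is null, so don't throw here;
        // report success and let a [Required] attribute flag the missing value.
        var text = value as string;
        if (text == null)
        {
            return true;
        }

        return text.Length >= 5;
    }
}
```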

As one last step, I decided to take a look at some of the source code to see if I could make sense of all of this. Without ReSharper (or a full VS at home) it was pretty hard to piece things together. What I did find was the RegularExpressionValidator in Sys.Mvc.

2) Included Validators seem to be dependent on each other.

I'm not sure where this is used, but the implementation seems to depend on the RequiredValidator. A quick test yields that a null will pass through the regular expression attribute no problem!

namespace Sys.Mvc {
    using System;

    public sealed class RegularExpressionValidator {

        private readonly string _pattern;

        public RegularExpressionValidator(string pattern) {
            _pattern = pattern;
        }

        public static Validator Create(JsonValidationRule rule) {
            string pattern = (string)rule.ValidationParameters["pattern"];
            return new RegularExpressionValidator(pattern).Validate;
        }

        public object Validate(string value, ValidationContext context) {
            if (ValidationUtil.StringIsNullOrEmpty(value)) {
                return true; // let the RequiredValidator handle this case
            }

            RegularExpression regExp = new RegularExpression(_pattern);
            string[] matches = regExp.Exec(value);
            return (!ValidationUtil.ArrayIsNullOrEmpty(matches) && matches[0].Length == value.Length);
        }

    }
}

I'm sorry this post was a bit of a ramble; I guess I am just confused by the implementation of validation in MVC 3.

Sunday, April 24, 2011

HackThisSite - Realistic 5 Solution

This was a pretty fun exercise, albeit pretty simple.

1) Recon
The first thing you should always do is have a look around. With HackThisSite, it is very very very important that you read the descriptions. You'll note a couple of things.

- "Everything they use is 10 years old"
- "new password seems to be a 'message digest'"

With that, you should have a look around. You will notice there are a lot of email addresses on the pages. These are good to keep in case you need to start guessing usernames (you don't, but just saying). On the news page you will notice something about Google finding links that it shouldn't. Immediately, you should think to take a look at the robots.txt file.

In the robots.txt file, you will notice a few directories that they don't want you looking into.

2) Discovery

Start poking around in the directories that you found in the robots.txt file. In there, you will find copies of PHP files, etc. Start clicking on them. You will notice that one of them displays a hash that it is trying to match. If you take a look at the lib directory, you will have access to a "hash" library. You can download that and just open it up in Notepad. From it, you will learn the hashing algorithm to use to try and break the password.

3) Exploitation

Using a program like mdcrack, you will very quickly find an input that produces the correct hash (really, that is all you need; the original password is irrelevant).
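To see what a tool like mdcrack automates, here is a naive brute-force sketch in C# (the class name, the lowercase-only alphabet, and the length cap are my own simplifications; real crackers are vastly faster):

```csharp
using System;
using System.Collections.Generic;
using System.Security.Cryptography;
using System.Text;

public static class Md5BruteForcer
{
    // Hash every lowercase candidate up to maxLength characters and return the
    // first one matching the target hex digest (null if none is found).
    public static string Crack(string targetHexDigest, int maxLength)
    {
        using (var md5 = MD5.Create())
        {
            foreach (var candidate in Candidates("", maxLength))
            {
                var digest = md5.ComputeHash(Encoding.ASCII.GetBytes(candidate));
                var hex = BitConverter.ToString(digest).Replace("-", "").ToLowerInvariant();
                if (hex == targetHexDigest)
                {
                    // Any matching input will do; we don't need the original password.
                    return candidate;
                }
            }
        }
        return null;
    }

    // Depth-first enumeration of "a".."z", "aa".."zz", ... up to the length cap.
    private static IEnumerable<string> Candidates(string prefix, int remaining)
    {
        if (prefix.Length > 0)
        {
            yield return prefix;
        }
        if (remaining == 0)
        {
            yield break;
        }
        for (var c = 'a'; c <= 'z'; c++)
        {
            foreach (var candidate in Candidates(prefix + c, remaining - 1))
            {
                yield return candidate;
            }
        }
    }
}
```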

Enter said password into the database page, and you have completed level 5!

Monday, April 18, 2011

c#: Custom Validation Example

Validation is a big thing for any application.  I am currently working on implementing validation in my WCF layer that doesn't rely on me checking each operation and then hardcoding validation logic into a gigantic class.  I really like the way MVC does model validation, so I decided to try to reproduce it to figure out how they did it.

It turns out that it is pretty simple, as the following code suggests.  I created a base attribute called ValidationAttribute and a BaseModel with a Validate method on it.  That method uses some simple reflection to get all the attributes for all the properties of the given class, then calls IsValid on those attributes to run the specified logic.

Check it out.


    public class BaseModel
    {
        public bool Validate()
        {
            var result = true;

            foreach (var property in this.GetType().GetProperties())
            {
                foreach (var attribute in property.GetCustomAttributes(true))
                {
                    if (attribute is ValidationAttribute)
                    {
                        try
                        {
                            var attr = (ValidationAttribute)attribute;
                            result = result && attr.IsValid(property.GetValue(this, null));
                        }
                        catch (Exception)
                        {
                            result = false;
                        }
                    }
                }
            }

            return result;
        }
    }


    public class LoginModel : BaseModel
    {
        [UserNameValidation]
        public string UserName { get; set; }
    }

    [System.AttributeUsage(System.AttributeTargets.Property)]
    public class ValidationAttribute : Attribute
    {
        public virtual bool IsValid(object obj)
        {
            return false;
        }
    }

    public class UserNameValidationAttribute : ValidationAttribute
    {
        private const string USER_NAME_REGEX = @"^\w+$";

        public override bool IsValid(object obj)
        {
            // Reject null and any non-string value before matching,
            // so Regex.IsMatch never receives a null input.
            var value = obj as string;

            if (value == null)
            {
                return false;
            }

            return Regex.IsMatch(value, USER_NAME_REGEX);
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            var loginModel = new LoginModel();
            loginModel.UserName = "UserName";
            Console.WriteLine(loginModel.Validate());
        }
    }
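To see both outcomes of the Validate walk, here is a condensed, self-contained version of the classes above (the invalid sample value is made up for illustration): an alphanumeric name passes the `^\w+$` white list, while one containing spaces and punctuation fails.

```csharp
using System;
using System.Text.RegularExpressions;

[AttributeUsage(AttributeTargets.Property)]
public class ValidationAttribute : Attribute
{
    public virtual bool IsValid(object obj) { return false; }
}

public class UserNameValidationAttribute : ValidationAttribute
{
    public override bool IsValid(object obj)
    {
        var value = obj as string;
        return value != null && Regex.IsMatch(value, @"^\w+$");
    }
}

public class BaseModel
{
    public bool Validate()
    {
        var result = true;

        // Walk every property, run every ValidationAttribute found on it.
        foreach (var property in GetType().GetProperties())
        {
            foreach (var attribute in property.GetCustomAttributes(true))
            {
                var validator = attribute as ValidationAttribute;
                if (validator != null)
                {
                    result = result && validator.IsValid(property.GetValue(this, null));
                }
            }
        }

        return result;
    }
}

public class LoginModel : BaseModel
{
    [UserNameValidation]
    public string UserName { get; set; }
}

class Demo
{
    static void Main()
    {
        Console.WriteLine(new LoginModel { UserName = "UserName" }.Validate());   // True
        Console.WriteLine(new LoginModel { UserName = "not valid!" }.Validate()); // False
    }
}
```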

Monday, April 11, 2011

HackThisSite - Realistic 4 Solution

This was a pretty fun little exercise.  It really got me thinking about where my skill set is these days.  It is one thing to know that SQL Injection exists, and another to be really good at crafting queries to use it to your advantage.  Here are the steps to follow to solve this one.

1)  Recon
In the recon phase, you do the basics.  Take a look at the source.  Click around on the links and see where they take you.  In real life you would spider the site, but for this you will have to just see what you can see.  What you will notice is that there are two "input points".  The first is a small form asking you to enter your email address.  The second is the set of links to the product pages, which look like products.php?category=1.  As you know, any input can be fuzzed.


2)  Discovery
In this phase, we start playing around to see what we can find.  The first step is to work on the email form.  There are many places this email could be stored, but most commonly it goes into a SQL database.  A quick SQL injection attempt here yields some interesting results.  The developers of the site have not bothered to mask their error messages (quite common in real life), and you learn that the name of the email table is email.  Clearly, though, the SQL injection attempt itself is being blocked, and there is no way to do blind SQL injection at this point since we don't have a way to view the information.  (Yes, you could try pinging and such, but this is just a test.)


The next step is to see if the other inputs on the page have any problems.  A quick SQL injection attack via Firebug shows that there is no SQL protection on the links to the product pages.


products.php?category=1 or 1=1


This produces a page with all products on it.  Furthermore, if you put in a SQL statement that generates an error, you get a nice little blank page.


3)  Exploitation
Now that we know the basics of what we can do, it is time to exploit it.  SQL has a command called UNION ALL.  Basically, this command allows you to combine the results of two SELECT statements.  The key is that the column counts have to match.  By looking at the product page, you can try to guess how many columns are being returned by the original query.  There seems to be a link to an image, a description, and a price.  There is probably also an id of some kind.  That makes 4.

Since description is the field that seems to print out a string, we will use it for our query.

products.php?category=1 UNION ALL SELECT null, *, null, null FROM email;

The only reason this works is that email probably has only 1 column, so the * expands to a single column and the nulls pad the count out to the 4 columns the original query returns.  At least, that is my understanding of the above SQL query.

Running that produces a list of all the email addresses currently in the system.  To finish off the challenge, use the HTS Message Center to send the list to SaveTheWhales.

Enjoy.
 

Sunday, April 10, 2011

Self Updating WCF Service

Recently I was in a meeting where my idea of a self-updating WCF service was laughed at.  This post contains a very rough proof of concept showing that the idea is possible.  The solution is pretty simple.

1) The self-hosted WCF service has a file watcher looking for a file to be created.
2) The WCF service has an "update" method that creates a file to trigger an update.
3) The self-hosted service reloads the new assembly by way of reflection.

I have skipped over a lot of the details in this proof of concept. Things to look at would be:

1) How does the update file get there? (It could arrive via the WCF service itself.)
2) A file watcher is not the only way to do this. Maybe NServiceBus? I would have to look into it more.
3) The DLL needs to be verified somehow before it is loaded automatically. This could be done via signed DLLs, or another method. Open to suggestions.
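For point (3), one possible approach is sketched below. This is only an idea, not a vetted security control: the token bytes are made up, and a real deployment would want Authenticode or hash verification as well. It reads the incoming assembly's public key token from its metadata, without executing any of its code, and compares it to the token of our own signing key before calling Assembly.LoadFile.

```csharp
using System;
using System.Linq;
using System.Reflection;

class DllVerifier
{
    // Hypothetical public key token of the key we sign our updates with.
    private static readonly byte[] ExpectedToken =
        { 0xb7, 0x7a, 0x5c, 0x56, 0x19, 0x34, 0xe0, 0x89 };

    // GetAssemblyName reads the assembly's identity from its metadata
    // without loading any of its code into the process.
    public static bool HasExpectedToken(string path)
    {
        byte[] token = AssemblyName.GetAssemblyName(path).GetPublicKeyToken();
        return token != null && token.SequenceEqual(ExpectedToken);
    }

    static void Main()
    {
        string path = @"c:\temp\shamir.dll";
        try
        {
            if (HasExpectedToken(path))
            {
                // Only load the update once the token matches.
                Assembly.LoadFile(path);
                Console.WriteLine("Loaded verified update.");
            }
            else
            {
                Console.WriteLine("Rejected unsigned or foreign assembly.");
            }
        }
        catch (Exception e)
        {
            Console.WriteLine("Could not verify: " + e.Message);
        }
    }
}
```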

Anyways, here is the code.

WCF Service
using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.ServiceModel.Web;
using System.Text;
using System.IO;

namespace WCFSelfUpdater
{
    public class Service1 : IService1
    {
        private string versionNumber = "1.1";

        /// <summary>
        /// Version number in the original program is 1.0. Changed to 1.1.
        /// If you want to reproduce this, you need to create 2 DLLs, each with a
        /// different version number.
        /// </summary>
        /// <returns>The current version number.</returns>
        public string GetVersionNumber()
        {
            return versionNumber;
        }

        /// <summary>
        /// This method could write the stream out to disk and then move it (an
        /// atomic operation) to where it needs to be. In this case I just skipped
        /// over the implementation details to show that the idea is possible.
        /// </summary>
        /// <param name="stream">The contents of the new assembly.</param>
        /// <returns>Whether the update was triggered successfully.</returns>
        public bool Update(Stream stream)
        {
            try
            {
                using (TextWriter writer = new StreamWriter(@"c:\temp\shamir.dll"))
                {
                    writer.WriteLine("PLEASE UPDATE");
                }
            }
            catch (Exception e)
            {
                Console.WriteLine("ERROR" + e.Message);
            }

            return false;

        }
    }
}


Self Hosting Command Line Application
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.ServiceModel;
using System.ServiceModel.Description;
using System.Reflection;
using System.IO;
using System.Threading;

namespace WCFSelfUpdaterHost
{
    class Program
    {
        static bool NeedsUpdate = false;

        static void Main(string[] args)
        {
            try
            {
                //Step 1: create a file watcher to check for "shamir.dll" being placed in c:\temp
                FileSystemWatcher watcher = new FileSystemWatcher(@"c:\temp","shamir.dll");
                watcher.EnableRaisingEvents = true;
                watcher.Changed += new FileSystemEventHandler(watcher_Changed);
                Uri baseAddress = new Uri("http://localhost:9999/hello");
                
                // initial setup of WCF service with version 1.0
                Assembly assem = Assembly.LoadFile(@"D:\work\WCFSelfUpdater\WCFSelfUpdater\bin\WCFSelfUpdater.dll");
                Type serviceType = assem.GetType("WCFSelfUpdater.Service1");

                bool done = false;

                // CAUTION::: Done is never set to true.  I ran this in debug mode so didn't have a problem!!!!
                while (!done)
                {
                    var result = RunService(serviceType, baseAddress);
                    if (result == ResultCode.NeedsUpdate)
                    {
                        // Load new service
                        assem = Assembly.LoadFile(@"D:\work\WCFSelfUpdater\WCFSelfUpdater\bin\WCFSelfUpdater1.dll");
                        serviceType = assem.GetType("WCFSelfUpdater.Service1");

                        // This code simply checks to make sure the version changed.
                        var obj = Activator.CreateInstance(serviceType);
                        MethodInfo me = serviceType.GetMethod("GetVersionNumber");
                        var result1 = me.Invoke(obj, null);
                        Console.WriteLine("NEW VERSION NUMBER:" + result1);

                        // Need this to keep loop below going
                        NeedsUpdate = false;
                    }
                    else
                    {
                        done = true;
                    }
                }
            }
            catch (Exception e)
            {
                Console.WriteLine("Error received.  Enter to exit. " + e.Message);
                Console.ReadLine();
            }

            Console.WriteLine("Done execution: enter to exit");
            Console.ReadLine();
        }


        static void watcher_Changed(object sender, FileSystemEventArgs e)
        {
            NeedsUpdate = true;
        }


        static ResultCode RunService(Type type, Uri baseAddress)
        {
            using (ServiceHost host = new ServiceHost(type, baseAddress))
            {
                ServiceMetadataBehavior smb = new ServiceMetadataBehavior();
                smb.HttpGetEnabled = true;
                smb.MetadataExporter.PolicyVersion = PolicyVersion.Policy15;
                host.Description.Behaviors.Add(smb);

                host.Open();

                Console.WriteLine("Enter to stop");
                while (!NeedsUpdate) { Thread.Sleep(100); }

                host.Close();
                return ResultCode.NeedsUpdate;
            }
        }
    }

    enum ResultCode
    {
        Done,
        NeedsUpdate
    }
}


Calling application
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.IO;
using System.Threading;

namespace WCFClient
{
    class Program
    {
        static void Main(string[] args)
        {
            var client = new ServiceReference1.Service1Client();
            try
            {

                StreamReader reader = new StreamReader(@"c:\temp\WCFSelfUpdater.dll");
                

                client.Open();
                Console.WriteLine(client.GetVersionNumber());
                // doesn't matter what file is being read in as it isn't currently
                // used in the WCF Service
                Console.WriteLine(client.Update(reader.BaseStream));
                // Sleeping to allow time to reload the wcf service.
                Thread.Sleep(10000);
                Console.WriteLine(client.GetVersionNumber());
                Console.ReadLine();
            }
            catch (Exception e)
            {
                Console.WriteLine("Error received.  Enter to exit. " + e.Message);
                Console.ReadLine();
            }
            finally
            {
                client.Close();
            }
        }
    }
}



In any event, this proof of concept is pretty rough.  There is little in the way of security implemented.  Still, it is good to know that the idea is possible, and it only took about an hour to research and implement.

Resources used:
http://dranaxum.wordpress.com/2008/02/25/dynamic-load-net-dll-files-creating-a-plug-in-system-c/
http://www.c-sharpcorner.com/UploadFile/mokhtarb2005/FSWatcherMB12052005063103AM/FSWatcherMB.aspx
http://www.csharp-examples.net/reflection-examples/

Thursday, April 7, 2011

Adventures in LINQ - Part 2

For this adventure, I decided to try to simulate an outer join.  LINQ does have a Join method; however, by default it performs an inner join.  There are many applications where you would want an outer join instead.  More specifically, I am doing a left outer join.

On the surface, the solution located at Hooked on LINQ seems a fair bit complex.  I have come up with my own solution instead.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace LinqOutterJoin
{
    class Program
    {
        static void Main(string[] args)
        {
            var element1 = new Element() { Id = 1, Name = "Element1"};
            var element2 = new Element() { Id = 2, Name = "Element2"};
            var element3 = new Element() { Id = 2, Name = "Element3"};
            var element4 = new Element() { Id = 4, Name = "Element4"};
            var element1a = new Element() { Id = 1, Name = "Element1a"};


            var elementList1 = new List<Element> { element1, element2, element3 };
            var elementList2 = new List<Element> { element1a, element4 };

            elementList1 = elementList1.Concat(elementList2.Except(elementList1)).ToList();

            elementList1.ForEach(x => Console.WriteLine(x));

        }
    }

    public class Element
    {
        public int Id { get; set; }
        public string Name { get; set; }

        public override bool Equals(object obj)
        {
            var other = obj as Element;

            // Elements are considered equal when their Ids match.
            return other != null && this.Id == other.Id;
        }

        public override string ToString()
        {
            return this.Id.ToString() + " " + this.Name.ToString();
        }

        public override int GetHashCode()
        {
            return this.Id.GetHashCode();
        }
    }
}
Basically my solution makes use of the Concat and Except methods.  Concat is pretty simple: it takes list 2 and adds it to the end of list 1.  The Except call goes through all the elements in list 2 and only returns the ones that do not already exist in list 1.  The result is every element of list 1 plus the unmatched elements of list 2.  Sounds like an outer join to me!
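For comparison, the conventional way to express a left outer join in LINQ is GroupJoin followed by DefaultIfEmpty. This sketch reuses the Element shape from above with made-up data: every left element appears exactly once, paired with a matching right element or with null.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Element
{
    public int Id { get; set; }
    public string Name { get; set; }
}

class Demo
{
    static void Main()
    {
        var left = new List<Element>
        {
            new Element { Id = 1, Name = "Element1" },
            new Element { Id = 2, Name = "Element2" }
        };
        var right = new List<Element>
        {
            new Element { Id = 1, Name = "Element1a" }
        };

        // Left outer join on Id: GroupJoin groups the right-side matches per
        // left element, and DefaultIfEmpty supplies null when there is none.
        var joined = left.GroupJoin(right,
                                    l => l.Id,
                                    r => r.Id,
                                    (l, matches) => new { l, matches })
                         .SelectMany(g => g.matches.DefaultIfEmpty(),
                                     (g, r) => new { Left = g.l, Right = r });

        foreach (var pair in joined)
        {
            Console.WriteLine("{0} -> {1}",
                pair.Left.Name,
                pair.Right == null ? "(no match)" : pair.Right.Name);
        }
    }
}
```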

Monday, April 4, 2011

MVC3 XSS Protection

I am focusing quite a bit on security in my current project, so I decided to spend a little time working with the default XSS protection in MVC3.

On the surface, the default protection against XSS in MVC3 seems quite robust.  I started off by creating a simple form that took a message input.  This message was added to the ViewData and passed back to another view where it was displayed.  I tried the most basic of XSS attacks and got a nice little error message saying

Server Error in '/' Application.

A potentially dangerous Request.Form value was detected from the client (message="<script>alert("hello...")



Pretty good, eh?  Of course you would want a custom error page all set up, but this is quite nice protection to have right out of the box.  MVC3 includes a ValidateInputAttribute which you can apply with false to disable the input validation.  You can set this attribute only at the method or class level, so make sure you know what you are doing.


[ValidateInput(false)]
public ActionResult Display(string message)
{
    .....
}

Adding this attribute to my method got rid of the nasty error message received before.  I proceeded to add the message directly to the ViewData and output it directly on the screen.  To my amazement, it printed the input with the proper escaping!  WOW!  Microsoft finally got something right.  I then asked myself: what if I actually wanted to display HTML on the screen, say for rich text input?  It turns out that you have to do a combination of the following.

On your model's input property, you use the AllowHtmlAttribute.  This attribute ensures that you won't get a nasty message when input validation occurs, without having to disable validation for every field on that method or class.  It also allows you to still do some sanity checks on the data you are receiving.  To get this HTML to display properly, you then have to use the Html.Raw method to output the data.


public class DisplayModel
{
    [AllowHtml]
    public string Message { get; set; }
}

Hello, @Html.Raw(Model.Message)

It is good to see that MVC3 is trying to do protection by default.

Sunday, April 3, 2011

Book Review: Don't Make Me Think 2nd Edition

On my current project, I have been forced to wear many hats. One of those hats is that of lead designer. Sure, putting divs on the page is easy. But how do you actually make something look good? How do you make it usable? How do you design a user experience?

I would call this book a good intro to the world of design. The author sets the expectations for the book at the beginning: it is not an all-inclusive book (I don't think one exists in the world of design), but a short primer to get the reader up to speed on some of the biggest design flaws. As the author says in his first few pages,
"You don't need to know everything. As with any field, there is a lot that you could learn about usability. But unless you are a usability professional, there is a limit to how much is useful to learn."

I would say that this book is a good intro.  It will give you the skills necessary to take a critical look at your website's design.  It will give you ideas to tweak the design.  It will give you the skills (say, if you are a hiring manager) to take a good look at work being presented to you.  Most of all, it will force you to think objectively about web usability and design so that you can make better decisions.

I recommend this book to anyone wanting to get a start in web usability.

Friday, April 1, 2011

Validation in MVC3: An Example

[[Update]]
I had to add a null check to my custom validator. You can find out more information in this post.


Validation is a big aspect of security in web applications.  I can't count how many times I have seen blatant ignorance of this simple fact.  Just recently I was browsing an application built by a 3rd party (who probably charged an arm and a leg for their product).  It took about 1 minute to find a huge SQL injection flaw in their application.  One of the GET parameters being passed into the application was put directly into a database call.  Worse than this, I got an error message telling me that my SQL statement had not worked.  This error message told me the following pieces of information.

1)  It returned the actual SQL call
2)  It told me the database that was being used, along with the version
3)  It told me the web framework that was being used.

The security industry has spent a lot of time trying to educate developers on best practices for building secure applications.  It is unfortunate to see people still doing this kind of sloppy work.  I guess there is a reason why injection attacks are still on the OWASP top 10 list.

In this article I am going to go over making a custom user name validation attribute.

User names are part of most web applications these days.  Note that there are other perfectly valid ways to do what I am doing here.  I have chosen to make a custom attribute because I assume that I will be reusing this user name validation in other parts of my application.  Following the DRY principle, it is best to create a custom attribute and go from there.


using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;
using System.ComponentModel.DataAnnotations;
using System.Text.RegularExpressions;

namespace MvcApplication2.Attributes
{
    public class UserNameAttribute : ValidationAttribute 
    {
        private const string WHITE_LIST_REGEX = @"^[a-zA-Z0-9]*$";
        private const int MIN_LENGTH = 5;
        private const int MAX_LENGTH = 25;

        public UserNameAttribute()
        {
            // Set a default error message that does not give any information away
            // We don't want an attacker to gain information as to how we build our user names
            // This is a good security measure in cases when the site is not open to the public
            // registration.
            ErrorMessage = "Please enter a valid user name.";
        }

        public override bool IsValid(object value)
        {
            if (value == null)
            {
               return false;
            }
            // Sanity check 1:  Is it a string?
            if (!(value is string))
            {
                return false;
            }

            var userName = value as string;

            // Sanity check 2:  Is it within acceptable norms?
            if (userName.Length < MIN_LENGTH ||
                userName.Length > MAX_LENGTH)
            {
                return false;
            }

            // White List Check
            if (Regex.IsMatch(userName, WHITE_LIST_REGEX))
            {
                return true;
            }

            return false;
        }
    }
}

In order to do proper input validation you have to follow the following rules.

1)  Input is always invalid by default
2)  Input should conform to a known whitelist
3)  Sanity checks should be done to ensure that you only operate on plausible values
4)  Information leakage should be avoided on unauthorized pages

As you can see from the above code, I return false by default.  I use a generic, standard error message to combat (4).  Building white lists is easy with the use of regular expressions; you can validate almost any type of input.  In the case above, I use a business rule defined in my application to build my white list: I know that my user names only contain letters and numbers, so I enforce this in the white list as shown in the code.  There is also a check that tests for minimum and maximum length.  Although I should enforce this on the form as well, we all know that any client-side checking can be disabled very easily, so it is worth building in a server-side check to make sure the length of the user name meets known business rules.  I could have just as easily incorporated the length rule into the same regular expression that does the character white list.  It would look something like

string fullRegex = @"^[a-zA-Z0-9]{5,25}$";
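As a quick sanity check of the combined pattern (the sample names are hypothetical), a single regular expression built from the attribute's MIN_LENGTH and MAX_LENGTH constants (5 and 25) enforces both the character white list and the length rule in one pass:

```csharp
using System;
using System.Text.RegularExpressions;

class RegexDemo
{
    static void Main()
    {
        // Character white list and 5-to-25 length rule rolled into one pattern.
        const string fullRegex = @"^[a-zA-Z0-9]{5,25}$";

        Console.WriteLine(Regex.IsMatch("jsmith42", fullRegex));  // True
        Console.WriteLine(Regex.IsMatch("bob", fullRegex));       // False: too short
        Console.WriteLine(Regex.IsMatch("j.smith42", fullRegex)); // False: '.' not allowed
    }
}
```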

In this case, my LoginController contains a Login action that takes a LoginModel model.  It is easy to add the above attribute in the LoginModel to provide the necessary protection.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.ComponentModel.DataAnnotations;
using MvcApplication2.Attributes;

namespace MvcApplication2.Models
{
    public class LoginModel
    {
        [Required]
        [UserName]
        public string UserName { get; set; }
    }
}

Of course, security in layers is the best protection to use.  This example above is just one of the layers that you can use to protect your application.  Now that the input has passed some validation, you can use the user name supplied and check against your database to see if the user actually exists.

Happy validating!

Sunday, March 27, 2011

Course Review: SANS SEC542

Well it is that time again for another course review.  This time it is SANS SEC542: Web App Penetration Testing and Ethical Hacking.

I personally found this course super interesting.  I took it OnDemand via SANS (personally, the only way to do courses) and I greatly benefited from the insight provided by Kevin Johnson, as well as the extra time to do my own research into the tools and techniques mentioned in class.  The following is a summary of the core concepts that I learned from this course.  If you have a chance to take it, I suggest you do.  The insight and lab environment provided by SANS prove to be effective learning tools.

Day 1: Attackers view of the web
The real purpose of this day is to ensure that everyone taking the class has a similar baseline of knowledge.  This knowledge, of course, is what the rest of the course will build on.  What I particularly liked about this part of the course was not so much the overview of TCP and the different types of authentication (basic/digest...) but how all of this was summed up into how the attacker views the different mechanisms that are used in security today.

The course goes into depth about the authentication techniques used today.  For example, in today's web, session state is everything, and the course goes over the different ways that session state is persisted on top of a stateless protocol like HTTP.

The course also talks about the different types of "testing", and what each type will accomplish.  On a related note, I was recently asked about the difference between penetration testing and a security assessment.  At the time, I kind of flubbed the answer, but after reviewing my notes I would take the following stance.  A penetration test is simply a matter of seeing "how far can you get"; you can only truly know the risk of a vulnerability if you have fully explored how far you can take it.  A security assessment, on the other hand, is more of a "try and find" approach.  The goal is not to find out how far a certain vulnerability can go, but rather to figure out where the holes are and plug them.

The most important part of day 1, in my opinion, was the discussion of the attack methodology used.  Since taking the course, I have learned that there are other methodologies that can be followed, such as OSSTMM.  Really, the methodology here is quite simple.
1) Recon: research the target
2) Mapping: understand the target and its surroundings
3) Discovery: look for vulnerabilities
4) Exploitation: launch attacks!!!


Day 2: Recon and Mapping

Recon, as defined by the course, simply means to research the target.  In the day and age of things like Stack Overflow, recon has gotten more complex.  Kevin tells the story of one assessment where he actually used Google and message boards to determine vulnerabilities in code.  Coders like to post samples (especially when they have problems) on message boards looking for help.  Kevin repeatedly says that the only advantage a company has over a hacker is that the company has access to the code.  This really changed my mind on posting code samples on the internet.  Most of the techniques and tools involved in recon are general stuff; just remember that you can use anything public-facing to get an idea of what the company is using.  Take, for example, job postings.

Once you have your targets to hit, you can begin the mapping phase.  Mapping generally goes in the following order:
1) Port scan
2) OS fingerprint & version scan
3) SSL analysis
4) Virtual hosting & load balancer analysis
5) Software configuration analysis
6) Spidering
7) Detailed analysis

Basically, you are trying to find out as much information about the target as possible, since any of it could be useful during the next phases.  For example, in (3) you could determine that the server allows the NULL SSL cipher, which basically means data is sent in the clear; you might be able to use this during later phases.  Another interesting aspect of mapping is (5): there are automated tools that you can use to help determine the configuration of applications running on the server.  The many tools in this space include
1) Nmap
2) p0f
3) httprint
4) Nikto
5) WebScarab

One important step of mapping is to try and chart out the application itself.  What pages link to others, etc.

Day 3: Server-Side Vuln Discovery
Basically this step involves probing the server to try to determine weaknesses in the application.  The easiest way to accomplish this is with automated scanners such as w3af.  One should never rely on automated scanners to do all the work, however, and most of the day talks about manual ways to do discovery.  Very very very interesting stuff.

Day 4: Client-Side Discovery
This day primarily focuses on the client-side technologies used in modern websites, and how one might be able to exploit them.  One example I remember clearly was an AJAX shopping cart.  A typical shopping cart has the following 4 steps.
1) Add an item
2) Subtotal
3) Charge credit card
4) Checkout.

Well, what would happen if you ran all of these calls out of order?  Would it work?  You'd be surprised to know that not too long ago some major AJAX shopping carts had vulnerabilities like this.
This day also goes into depth about AJAX, web services, XPath injection and more.

Day 5: Exploitation
This day was by far the "fun" day of the course.  Here they talk about bypassing authentication.  They talk about using your SQL injection for bad.  They talk about making zombies of browsers on internal networks, and then using them to continue your attack.  Really really neat stuff.  I'm not going to go into detail here.

Overall this was a great course.  I think it has provided a solid foundation for me to build my skills on.  I recommend this course for all developers who want to know how attacks are really done.  To those people who say "so what... SQL injection... the database doesn't have anything useful on it", be warned: you are so wrong.

Tuesday, March 15, 2011

Adventures in LINQ - Part 1

The one thing that I am really enjoying about my current stint in C# is learning about and using LINQ.  This really seems like a powerful way to query objects.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace LinqExamples.examples
{

    class LinqExample2 : IExample
    {
        private Role Role1 = new Role(){Id = 1,Name = "Role 1"};
        private Role Role2 = new Role(){Id = 2, Name = "Role 2"};
        private Role Role3 = new Role(){Id = 3, Name = "Role 3"};
        private User User1 = new User() { Id = 1, Name = "User 1" };
        private Account Account1 = new Account() { Id = 1, Name = "Account 1" };

        List<UserRoleAccount> UserRoleAccountsList;
        List<Role> NewRoles;

        public LinqExample2() 
        {
            UserRoleAccountsList = new List<UserRoleAccount>()
            {
                new UserRoleAccount()
                {
                    Account = Account1,
                    Roles = new List<Role>() { Role1, Role2 }
                }
            };
            NewRoles = new List<Role>() { Role1, Role3 };
        }

        public void ExecuteExample()
        {
            Console.WriteLine("Linq Example #2: Only adding new roles");
            Console.WriteLine("========================================================");

            var rolesToAdd = NewRoles.Except(UserRoleAccountsList.Where(x => x.Account.Id == Account1.Id).Select(x => x.Roles).FirstOrDefault());
            foreach (var item in rolesToAdd)
            {
                Console.WriteLine("Role to add: {0}", item.Name);
            }

            var rolesIntersect = NewRoles.Intersect(UserRoleAccountsList.Where(x => x.Account.Id == Account1.Id).Select(x => x.Roles).FirstOrDefault());
            foreach (var item in rolesIntersect)
            {
                Console.WriteLine("Role intersect {0}", item.Name);
            }

            var rolesUnion = NewRoles.Union(UserRoleAccountsList.Where(x => x.Account.Id == Account1.Id).Select(x => x.Roles).FirstOrDefault());
            foreach (var item in rolesUnion)
            {
                Console.WriteLine("Role union {0}", item.Name);
            }
        }
    }

    class UserRoleAccount
    {
        public List<Role> Roles { get; set; }
        public Account Account { get; set; }
    }

    class User
    {
        public string Name { get; set; }
        public long Id { get; set; }
    }

    class Role
    {
        public string Name { get; set;}
        public long Id { get; set; }
    }

    class Account
    {
        public string Name { get; set; }
        public long Id { get; set; }
    }
}




As you can see from the above example, it is very easy to compare the NewRoles object to the list of roles already set in the UserRoleAccount object.

1) Except: displays all roles that are in NewRoles but not in the current role list
2) Intersect: displays all the roles that exist in both lists
3) Union: combines the two lists into one, without displaying duplicates

This is a far cry from the days of iterating over the lists or implementing custom comparator functions.

Saturday, March 5, 2011

Checkbox-fu with JQuery

The current project that I am on is a .NET MVC 3 project.  The framework comes with the jQuery library for making client-side controls easier to create.  I have only used YUI (2) before, so the concept of a JavaScript library is not new to me.

I must say that I am really liking the jQuery library.  As with all libraries, it takes a bit to learn the ins and outs of how the library works.  In this post, I am going to show you how to create buttons that will check and uncheck all of the checkboxes on the page.

First of all, the code.
$(document).ready(function() {
    $("#all").click(function() {
        $(".checkboxes input:checkbox").each(function() {
            $(this).attr('checked', true);
        });
    }); // end all.click

    $("#none").click(function() {
        $(".checkboxes input:checkbox").each(function() {
            $(this).removeAttr('checked');
        });
    }); // end none.click
});

<div class="checkboxes">
    <input type="checkbox" /> Soccer
    <input type="checkbox" /> Hockey
    <input type="checkbox" /> Football
    <input type="checkbox" /> Basketball
</div>
<span>
    <button id="all">Select All</button>
    <button id="none">Clear</button>
</span>
Above is the HTML markup for the demo.  It is pretty simple.  I have a div with a class of checkboxes.  Inside that are 4 checkboxes that are currently unchecked.  I then have a span with 2 buttons in it (a select all, and a clear).

The JavaScript code itself is pretty easy (as are most things with jQuery).  Since I want my buttons to get their functionality from JavaScript, I need to ensure that my script runs after the elements it affects have loaded.  The easiest way to do this is to wrap all of the JavaScript in a document-ready block, which jQuery provides through the $(document).ready function.

jQuery works on the idea of selectors.  You can use attributes such as id and class as selectors.  You can also use element names such as form to accomplish the same thing.  Be careful if you have more than one form on your page.

Back to the code.  My buttons have an id of all or none respectively.  The line inside each click handler uses jQuery's selector syntax to select multiple HTML elements.  In this case, it looks inside any element with the checkboxes class for input elements of type checkbox.  The .each method then runs a function on every element in the set returned by the selector.

The "this" keyword is pretty self-explanatory in this case: it represents the DOM element currently being operated on, because the function was called from .each.  See http://remysharp.com/2007/04/12/jquerys-this-demystified/ for more information on the "this" keyword in jQuery.

The last piece of the puzzle is the attr and removeAttr methods.  Basically, these add an attribute to a DOM element or remove it.  They are handy methods if you want to quickly disable and re-enable buttons.  In the above case, I am using them to add and remove the checked attribute.

jQuery can do a lot more than this, but this small example gives a glimpse of how powerful the library is.  You can easily accomplish several tasks with just a couple of lines of code.
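To see what the each loop is doing without a browser, here is a plain-JavaScript sketch where ordinary objects stand in for the checkbox DOM elements (the helper name is mine, not part of jQuery):

```javascript
// Plain objects stand in for checkbox elements so this runs without a DOM.
function setAllChecked(checkboxes, checked) {
    checkboxes.forEach(function (box) {
        // This is the effect that attr('checked', ...) / removeAttr('checked') has.
        box.checked = checked;
    });
}

const boxes = [
    { label: 'Soccer', checked: false },
    { label: 'Hockey', checked: false },
    { label: 'Football', checked: true }
];

setAllChecked(boxes, true);   // what the "all" button does
console.log(boxes.every(function (b) { return b.checked; })); // true

setAllChecked(boxes, false);  // what the "none" button does
console.log(boxes.some(function (b) { return b.checked; }));  // false
```

As an aside, jQuery 1.6 later split attributes and properties apart, and .prop('checked', true/false) became the recommended way to toggle checkbox state; .attr/.removeAttr was the idiom at the time of this post.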



Sunday, February 20, 2011

Course Review: Crucial Conversations

So recently I found myself in my performance review.  Everything I do is great.  My work is done on time, and meets requirements.  I constantly think ahead, incorporate new technologies to solve old problems, and think about the long term while building software.  What I don't do very well is communicate with others.  Hence my taking the crucial conversations course.

I found Crucial Conversations to be a great course/book.  What I loved about it is how the techniques described in the book promote candid responses.  They don't want you to dress things up.  They want you to be direct, and to "know what you want".  For example, say you rely on another team member to get some work done, and the work they produced is either incomplete or insufficient.  You could go on a flame streak.  You could insult them, degrade them, talk badly about them behind their back.  You could do all of that.  Or you could step back, analyze what you really want, and approach the problem that way.  You could say to yourself: I really want to get this work done (more than I want to insult anyone), so how can I have a conversation that will lead to the real goal?

Like most courses of this kind, the real focus is on you.  You have to change the way you think.  You have to know what you want, and keep that paramount in your mind while conversing.  You have to master your stories.  Everyone sees the "truth" through the lens of their experience.  In your own mind, take the story you have created about an event and try to separate out fact from fiction.  For example:
"The supervisor hired the recruit.  The young man poured sand in the copier.  The boss found out that the copier was broken and fired the new recruit."

Now examine the above story.  Did you think that the young man was the recruit?  Did the story actually say that, or did you just assume it to be true?  Is the boss the same person as the supervisor?  Are you sure?

The one element that I really liked about the course was the tools they developed for actually having a crucial conversation.  You have to follow the STATE rule.
S ==> Share your facts
T ==> tell your story
A ==> Ask for others' path
T ==> talk tentatively
E ==> encourage testing

I really suggest that you read the book (or take the course), as it goes into a lot of depth about how to deal with people.  The tools provided will probably help you solve a lot of issues you may be dealing with in both your work and your personal life.

Sunday, February 6, 2011

Book Review: Joel on Software

I think that this book can clearly be labeled as an "oldie but a goodie".  Published in 2004, this book provides a neat perspective on software development.  It was a great read, providing good tips and thoughts on a large variety of topics.  I particularly liked his articles on the microeconomics of software development.  If you want to become good at designing products, then you have to understand what "designs" sell.  I know that some software developers out there would say that that is more of a job for BAs or project managers.  I disagree.  Life is all about the value you can add to your job.  Let me break into a story.

I was on the way to Toronto over Christmas, and was looking for a place to park in the Calgary international airport's economy parking lot.  There were no spots, so I quickly hopped over to the park and jet.  I got on the shuttle to the airport to find just myself and the bus driver.  He was complaining about how the park and jet he worked for never gave raises.  He said that he had been working there for a few years and had never once received one.  At the time, I politely said that that was an unfortunate circumstance, and went on my way.  But it really got me thinking: why do people automatically assume that they should get raises?  In my mind, everyone should get a cost of living increase; I don't think that you should get paid less the next year for doing the same job (unless of course you have been demoted).  But why do people automatically think they deserve raises?  What value have you added to your organization to justify such a raise?  Have you taken courses to increase your technical skills?  Have you contributed above and beyond your job description?

How this relates back to the book is that a lot of what he talks about in the book has nothing to do with traditional software development.  But keeping the topics in mind will help you add value to your job, and thus (hopefully) get you that raise.  I think that good managers can spot the difference between good and bad programmers.  I really feel that when you do your job properly, when you keep the big picture in mind, you do produce better programs, a better product, and a better customer experience.

If I had to summarize his book, I'd probably give the following points.

1)  The Joel Test
http://joelonsoftware.com/articles/fog0000000043.html
     - Do you use source control?
     - Can you make a build in one step?
     - Do you make daily builds?
     - Do you have a bug database?
     - Do you fix bugs before writing new code?
     - Do you have an up-to-date schedule?
     - Do you have a spec?
     - Do programmers have quiet working conditions?
     - Do you use the best tools money can buy?
     - Do you have testers?
     - Do new candidates write code during their interview?
     - Do you do hallway usability testing?

The Joel test (as mentioned in the article) can be a quick guide to see where your software development practices stand.  You can also use it to rate places to work.  You wouldn't believe how many "15 years of experience" developers have never used source control.  Even myself, I have only really used Subversion up until this point and have not really gotten into the world of DVCS (mostly because my employer uses Subversion... it could be worse, it could be Visual SourceSafe).

2) Specs
This is a disease that you need to get.  If you don't have it, infect yourself today!
http://www.joelonsoftware.com/articles/fog0000000036.html

3) Strategy letters
Basically interesting articles on how to build software that will sell, and a little insight into why some big companies do what they do.

All in all, this book was a really good read.  It was good to read a book on software that wasn't exactly about the software that we write, but more about why we write it. (And a little about how to write it too!)

Wednesday, January 19, 2011

Windows 7 64bit: ctrl-space switching to Chinese

For a while now I have had an issue using my favorite IDEs (Eclipse, Visual Studio).  Every time I hit ctrl-space (auto-complete), it would switch to Chinese characters.

It turns out that ctrl-space is a Windows hotkey used to switch keyboard layouts.  If you have multiple keyboards installed (say, installed by default by the manufacturer who pre-loaded Windows for you), the ctrl-space command is intercepted by Windows, and you switch to the other language's keyboard.  I don't know if this has anything to do with the fact that the Chinese keyboard was listed first in my properties.

In any event, I just stumbled upon this SO question and it seems to have solved my issues.

http://stackoverflow.com/questions/179119/how-to-prevent-windows-xp-from-stealing-my-input-ctrl-space-which-is-meant-for-em

Book Review: Release It!

Having spent most of my time reading purely technical books to solve specific problems, it was refreshing to read a higher-level book that talks about good program design.  Release It! is by far one of the best books I have read, and it really opened my mind to areas of program design whose surface I had only begun to scratch.

Story-telling is one of the best forms of persuasion, and I really enjoyed the stories in this book.  I felt that they gave the main content depth and realism.  It is good to know that everyone makes these types of mistakes.

The main contents of this book are broken up into four distinct areas.  The first two describe stability and capacity patterns and anti-patterns.  It was interesting to read about the unique challenges that large-scale applications with millions of users face.  Things that we take for granted on smaller websites just wouldn't do on larger ones.  For example, say you want to stop a bot from scanning your site.  An easy way to do this is to force all clients to establish a session before granting them access to content.  Most (read: all) bots cannot hold a session since they are there purely for scraping purposes.  The problem with this is that each session created by a hit from a bot is stored somewhere in memory on a server.  Combine that with thousands of users trying to use your website at the same time, and we have a problem.  On small sites, you could probably get away with the memory being tied up for a specified timeout period, but on larger sites, you will not be so lucky.

The stability patterns described in the book are as follows:
1) Use timeouts
2) Circuit Breaker
3) Bulkheads
4) Steady State
5) Fail Fast
6) Handshaking
7) Test Harness
8) Decoupling Middleware

The capacity patterns:
1) Pool connections
2) Use Caching carefully
3) Precompute Content
4) Tune the garbage collector

He then goes on to describe some general design issues that have come up in his vast experience.  He talks a lot about developing SLAs with the business (or client) and then using those to help define the level of redundancy that a particular application needs.  He also talks a lot about administrative interfaces.  If you don't build applications that can be maintained, guess what, they won't be.  In the last section he talks about some coding practices that should be incorporated into any development process, including concepts like transparency.

It is interesting to be in this industry.  You see many projects directed towards delivering results with very little design up front.  The fact of the matter is, companies would probably save a lot of money if they spent more on design time.  You have to understand that designs can be reused.  Chances are you are building applications a certain way because they fit your business style.  With that in mind, other applications that you develop will also follow that same style.  A bit of planning and design ahead of time will have huge rewards in the long run and benefit projects down the line.  Another problem that I see is that consulting firms are hired based on price, not on quality.  I sometimes feel that businesses have a hard time separating the good from the bad.  Did the project meet the requirements? Maybe...  Did the project meet the budget? Yes...  CHECK!  Little do they know that the application was not built in an extensible way.  It was not built with any future considerations.  It will need to be replaced in 3 years rather than 5.

In any event, I recommend that all developers read this book, and, most importantly, keep the concepts that he talks about in this book in the back of your mind when developing any application.  You will probably find that your application handles failure better, and delivers a better overall customer experience.

Saturday, January 15, 2011

Top 10 mistakes made in behavior change

So I was reading hacker news and found the following link.
http://www.slideshare.net/captology/stanford-6401325

I suggest that everyone read it. Although it is just a point form slide show, it really hits the spot with why I think many people (including myself) fail at certain goals. There are three points that I really want to touch on.

1)  Ignoring how environment shapes behavior
2)  Trying to stop old behaviors instead of creating new ones
3)  Underestimating the power of triggers

I will start with a little story.  My wife and I had set a goal of watching less TV.  This goal has obvious benefits.  For some reason, though, we could never seem to get away from the TV.  The problem was this: we lived in a condo, and as soon as you walked in, you were basically right at the TV.  It was hard to sit on the couch for a few minutes after work without turning on the TV.  I finally decided one day to move the TV from the living room to the second bedroom.  Instantly, we had a nice, accessible area to just hang out, without the pressure of the TV also being in the same room.  I found it much easier to break away from wasting whole evenings watching TV.  It felt liberating, and it could not have happened if I had not realized how the environment I was in was affecting me.

Number 2 above really speaks to making positive, rather than negative, goals.  For example, the goal of stopping smoking is really a negative goal.  You could, instead, set a goal to live a healthier life.  Of course, that goal would have to be more specific than that, otherwise it would just be meaningless.  I really like the way the slideshow summarized this point.

Number 3 is really about understanding why you fail.  Let's say you eat junk food when you are stressed.  You make a goal to eat less junk food (or, alternatively, to eat healthier), but sometimes you slip.  It is important to examine the "failure" and try to figure out what triggered the slip.  Without understanding that, you can never get to the root of the problem.  The problem is not that you eat too much junk food; it is that you are too stressed in your life.

Thursday, January 13, 2011

mod_security, apache httpd, glassfish - Part 4

Part 4, mod_sec install.

One dependency that I missed downloading earlier was libxml2, which mod_security requires.  If you wish, you can also install Lua and curl.  Lua is needed if you plan to write your own rules and want to use the new Lua engine.  I'm not really planning on writing rules; the base rules are pretty good.  As for curl, it is only needed if you want to send logs to a central repository like LogLogic.  Since I don't have that kind of infrastructure set up, I won't worry about it in these posts.  If you do wish to install them, simply follow their install instructions and then edit the configure command below to point to the locations where you installed the dependencies.

For now, navigate over to http://xmlsoft.org/downloads.html, download libxml2, and build it:

./configure --prefix=/path/to/deps/ --enable-shared
make
make install

Now we can compile and install mod_security:

CC="gcc -m64" ./configure --prefix=/path/to/deps/ --with-apxs=/path/to/httpd/bin/apxs --with-pcre=/path/to/deps/bin/pcre-config --with-apr=/path/to/deps/bin/apr-1-config --with-apu=/path/to/deps/bin/apu-1-config --with-libxml=/path/to/deps/bin/xml2-config
make
make install


When you untarred the mod_security files, there was a rules directory.  We are going to copy that directory to /path/to/httpd/conf/rules.  Next, copy modsecurity_crs_10_config.conf.example to the same file name minus the .example extension.  The purpose of this series of articles is not to go through an in-depth setup of mod_security, just enough to get it working.  Go into the file you just copied and make the following change.

#
# Review your SecRuleEngine settings.  If you want to
# allow blocking, then set it to On however check your SecDefaultAction setting
# to ensure that it is set appropriately.
#
#SecRuleEngine DetectionOnly 
SecRuleEngine On

This basically turns the rules engine on.  You may want to leave it on DetectionOnly if you are testing a legacy app.  We are going to create another file by the name of modsecurity_crs_10_global_config.conf.  In this file, we set up a few more mod_security directives.

SecServerSignature "Microsoft-IIS/6.0"
SecDebugLog /cardlock/httpd/logs/modsec_debug.log 
SecDebugLogLevel 3 

Basically this sets the server to identify itself as IIS 6 and turns on the debug log.  The signature is a little "security through obscurity"; hopefully a few script kiddies will be deterred by it.

We need to add the following to our httpd.conf
<IfModule security2_module>
    Include conf/rules/modsecurity_crs_10_global_config.conf
</IfModule>

We need to now actually apply these rules to our site that we have just created.  In our site.conf, we should add the following.

Include conf/rules/modsecurity_crs_10_config.conf
Include conf/rules/base_rules/*.conf
Include conf/rules/optional_rules/*.conf

When you start up your httpd now, you may get some syntax errors from the mod_security configuration files.  I just go in and delete or comment out the line in question.  This is fine as long as you are not using the technology the line pertains to; for example, you can probably safely comment out any PHP rules if you are not using PHP on your website.  One error in particular is about a missing data file for comment spam.  You can go and find the file (try Google), or just comment out the line that is causing the issue.

After you start up Apache, you will notice that modsec_debug.log is getting populated.  Feel free to test out mod_security.  You can either use a tool like w3af, or just try putting SQL injection in a parameter.  You should see mod_security block it.

Sunday, January 9, 2011

mod_security, apache httpd, glassfish - Part 3

Part three.

Just to recap, we have now installed Glassfish, Java, all the deps for apache httpd, and the httpd itself.  At this point you can startup apache httpd, and then try to navigate to the main page.  You should get a 403 Forbidden. This is because we have not configured apache yet.

The Apache httpd configuration file (located at apache_httpd_root/conf/httpd.conf) is very well documented.  I'm not going to go through all of the details in the file, but I will touch on a few.  Ideally, you will make a copy of the original httpd.conf and start with a fresh file.  From a security perspective, you want the configuration file to contain as little "junk" as possible.  This will help when you are trying to figure out a problem with your server configuration; it is just easier to read without all the explanations, plain and simple.

## Sample Apache httpd.conf configuration

ServerRoot "/path/to/apache/httpd"
ServerAdmin "your.address@you.com"
ServerTokens Full
ServerName myserver.myserveraddress.com
Listen 80

User apache
Group apache

LoadModule expires_module modules/mod_expires.so
LoadModule headers_module modules/mod_headers.so
LoadModule unique_id_module modules/mod_unique_id.so
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_connect_module modules/mod_proxy_connect.so
LoadModule proxy_ftp_module modules/mod_proxy_ftp.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule proxy_scgi_module modules/mod_proxy_scgi.so
LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
LoadModule ssl_module modules/mod_ssl.so
LoadModule rewrite_module modules/mod_rewrite.so

DocumentRoot "/path/to/httpd/htdocs"

<Directory />
     Options FollowSymLinks
     AllowOverride None
     Order deny,allow
     Deny from all
</Directory>

ErrorLog "|/.../httpd/bin/rotatelogs -f /.../httpd/logs/error_log.%Y%m%d  86400"
TransferLog "|/.../httpd/bin/rotatelogs -f /.../httpd/logs/access_log.%Y%m%d 86400"

include conf/site.conf

So basically, the first section is just general information.  Don't worry about ServerTokens Full just yet; we are going to use mod_security to change our server signature in order to try and fool the script kiddies.  The next section sets up the user you want Apache to run as.  If you want Apache to listen on port 80, we need to start it as root; the user/group defined in the file is what Apache will switch to after it binds to port 80.  The LoadModule section simply loads all of the shared modules that we need.  The Directory block sets up the base permissions: basically, deny everyone for right now, until we get our site configured.  As mentioned before, Apache has wicked documentation, so I suggest you read it.  Let's move on.

Our goal here is to get apache httpd talking to Glassfish.  Once we establish that, we can work on implementing the advanced features of mod_security.

There are two ways to get apache httpd to play nice with Glassfish.  One is to use mod_proxy.  The other is to use mod_jk.  Here is a very short list of pros and cons.

Mod_Proxy
pro:  Super simple to set up.
con:  Glassfish will see every request as originating from the Apache httpd installation as opposed to someone out on the net.  This is bad if you want Glassfish to handle some type of security based on IP addresses.

Mod_JK
pro:  More information is passed to Glassfish (the originator's IP address, etc.).  It uses a dedicated binary protocol (AJP), which is faster than proxying HTTP.  It has built-in support for workers to help spread load.
con:  Harder to set up.

For the purpose of this article, we are just going to use mod_proxy.  I may come back later and make another post on using mod_jk.

We need to create a site.conf file with the following information.

<VirtualHost *:80>
  ProxyPass / http://localhost:8080
  ProxyPassReverse / http://localhost:8080
</VirtualHost>


Start up your Glassfish (and confirm that it is running).  Then start up your apache httpd.  You should be able to go to http://localhost and get served the glassfish page.

mod_security, apache httpd, glassfish - Part 2

Part deux!


Just a recap, part 1 took us through installing glassfish. We now have glassfish up and running on our system.
This part will focus on setting up the apache httpd.

1) Compile and install apr-1.4.2

tar xzf apr-1.4.2.tar.gz
cd apr-1.4.2
CC="gcc -m64" ./configure --prefix=/path/to/deps
make; make test
make install


The above pattern is going to be followed a lot (to a certain degree), so I'm only going to explain it this once.  Untar the archive, then run the configure script with the parameters you want.  The CC variable passes compiler arguments; "-m64" makes it compile for 64-bit (which is what my machine is).  Make the binaries, test them, install them.
You will also want to update your .bashrc again:

export LD_LIBRARY_PATH=/path/to/deps/lib

Don't forget to source it.

2) Compile and install apr-utils


tar xzf apr-util-1.3.10.tar.gz
cd apr-util-1.3.10
CC="gcc -m64" ./configure --prefix=/path/to/deps --with-apr=/path/to/deps/bin/apr-1-config
make; make test
make install

3) Compile and install pcre

CC="gcc -m64" ./configure --prefix=/path/to/deps --enable-shared
make
make install




5) Compile and install apache httpd
Now that we have installed all the dependencies, we need to configure and install the httpd.  There are a couple of security issues that come up during the installation phase.  Most admins would say (and I'm sure there is a "principle" written about it somewhere) that you should only compile what you actually need.  Apache httpd comes with several built in modules that you would never ever use in a reverse proxy situation.  You could approach this two ways.
1)  Build all the modules as shared with the --enable-mods-shared=all option.  Then you can simply edit your config to choose which modules to load.
2)  Use the configure script to only compile the modules you need
There are pros and cons to both approaches.  With (1), all the modules are still there: if an attacker somehow got access to the conf directory, they could enable modules that open further holes.  More realistically, someone could accidentally turn on a module, and that module could have a vulnerability.  Option (2) gives you complete control over what is actually on the system; if you ever needed an additional module, however, you would have to recompile to build it.  Because I'm game for pain, I'm going to go with option (2).





CC="gcc -m64" ./configure --disable-autoindex --disable-auth-basic --disable-cgi --disable-cgid --disable-userdir --enable-expires=shared --enable-headers=shared --enable-proxy=shared --enable-proxy-http=shared --enable-rewrite=shared --enable-so --enable-ssl=shared --enable-unique-id=shared --with-apr=/path/to/deps/bin/apr-1-config --with-apr-util=/path/to/deps/bin/apu-1-config --with-pcre=/path/to/deps/bin/pcre-config --with-ssl=/path/to/deps --prefix=/path/to/root/httpd-2.2.17
make
make install



You can go through the configure options on your own if you wish.  The only real requirements are ssl, rewrite, proxy, proxy-http, and unique-id.
After you have it all installed, you will want to run Apache to ensure that it starts up.  Because Apache is currently configured for port 80, you will have to run it as root.  Ideally you would have a startup script and sudo access to run it, but in this case, just su to root and run apachectl from the bin directory.  You should get a Forbidden error message when you type localhost into your browser.
In the next part, I will talk about configuring Apache httpd and using mod_proxy to connect to Glassfish.