Monthly Archives: March 2010

SunSpider Browser Benchmarks

Posted by Marius Dornean on March 23, 2010  /   Posted in Technology

Web 2.0

If you don’t already know, most of the Web 2.0 technologies today are built on top of JavaScript. JavaScript is a scripting language that browsers use to perform the more advanced features of a website. All of the pretty CSS layouts, the XML traffic, and of course the constant tweeting and face-booking would not be possible without this language and the support browsers provide for it.

Browsers

As most of us know, different browsers render pages differently; some are more efficient than others at different tasks, and some are more secure. Some of the bigger players come preinstalled on the OS of your choosing, while others sit quietly behind the scenes and wait for the few followers to download them. In any case, one thing is universal: each and every browser is ultimately different from the others.

Benchmarking

Benchmarking is a way to test the speed and efficiency of a task. Benchmarking JavaScript was, until recently, almost impossible. There have been lots of micro-benchmarks here and there, but no true test that sums up the speed of the JavaScript engine in a browser.
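
To illustrate the idea (in C#, the language used elsewhere on this blog, rather than the JavaScript the browser suites actually run), a micro-benchmark times one narrow task in isolation, while a suite like SunSpider aggregates many such timings into a single total. The tasks and iteration counts below are arbitrary placeholders:

using System;
using System.Diagnostics;
using System.Text;
using System.Text.RegularExpressions;

static class BenchmarkSketch
{
    static void Main()
    {
        // Time two separate tasks, then report each and a combined total,
        // roughly mirroring how a suite sums many individual test timings.
        long stringMs = Time(() =>
        {
            var builder = new StringBuilder();
            for (int i = 0; i < 100000; i++) builder.Append(i);
        });

        long regexMs = Time(() =>
        {
            var pattern = new Regex(@"\d+");
            for (int i = 0; i < 100000; i++) pattern.IsMatch("abc123def");
        });

        Console.WriteLine("String: {0} ms, Regexp: {1} ms, Total: {2} ms",
            stringMs, regexMs, stringMs + regexMs);
    }

    static long Time(Action task)
    {
        var watch = Stopwatch.StartNew();
        task();
        watch.Stop();
        return watch.ElapsedMilliseconds;
    }
}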

WebKit Open Source Project

Webkit.org is the home of the open source web browser engine developed by Apple, currently used in Safari and multiple OS X applications. One of the tools it provides is the SunSpider JavaScript Benchmark, a true representation of a browser’s JavaScript performance. By testing multiple facets of a browser’s JavaScript processing speed, we can get a balanced, real-world comparison of the processing speeds different browsers offer.

Chart in milliseconds (lower is quicker; the total was divided by 5 to fit nicely in the chart)

Raw Numbers

Browser                    3D     Access  Bitops  ControlFlow  Crypto  Date   Math   Regexp  String  Total
Internet Explorer 9 Beta   440.6  544.6   489.4   86.6         245     183.2  270.8  39      466.4   2765.4
Internet Explorer 8        668.4  929.6   725.4   139          389.8   474.2  595    208     1012    5141.6
Firefox 3.6                165.4  165.4   51.8    47           65.8    164.4  68.6   56.8    279     1064.2
Safari 4.0                 71.8   60      37.4    6.2          40      78.4   61.6   26.2    210.6   592.2
Chrome 4.1                 81.6   43.4    48.4    3.4          38.4    67.6   50.4   18      193.8   545
Opera 10.51                62.2   53      23.2    5.8          30      69     56.6   15.6    153.6   469

Conclusion

As you can see, not all browsers perform at the same speed. Lagging far behind is Microsoft’s Internet Explorer 8. While IE 9 promises, and in tech previews appears to have made, huge leaps in closing the gap with the other leading browsers, it suffers from slow release cycles. The other leading browsers push routine updates, which lets end users browse at the fastest speeds available, while IE leaves months or even years between version releases.

The true test of speed is yet to come. As more sites like Twitter and Facebook pop up, and Web 3.0 inches closer, it is up to the browser developers to ensure they keep up with the new demands for speed and efficiency.

C# 4.0 – Optional Parameters, Default Values, and Named Parameters

Posted by Marius Dornean on March 22, 2010  /   Posted in Technology, Web Development

Parameters

Parameters are simply values that can be passed into a method. A method specifies which parameters, along with their types, it accepts and expects. Traditionally in C#, when we wanted to accept different sets of parameters or set default values, we would overload the method and supply the missing information by chaining the calls. This can get messy and can produce a lot of variations of a method. Keeping track of default values can also become a challenge.

static void Main(string[] args)
{
    CreatePerson("Marius");
    CreatePerson("Marius", 24);
    CreatePerson("Marius", 24, "USA");
}

static void CreatePerson(string Name)
{
    CreatePerson(Name, 24, "USA");
}

static void CreatePerson(string Name, int Age)
{
    CreatePerson(Name, Age, "USA");
}

static void CreatePerson(string Name, int Age, string Location)
{
    //Logic…
}

Default Values & Optional Parameters in C# 4.0
In order to deal with the default value issue, C# 4.0 has introduced default values. By assigning a default to a parameter, we don’t require the value to be passed in which automatically makes it optional.

static void Main(string[] args)
{
    CreatePerson("Marius");
    CreatePerson("Marius", 24);
    CreatePerson("Marius", 24, "USA");
}

static void CreatePerson(string Name, int Age = 20, string Location = "USA")
{
    //Logic…
}

Named Parameters

When you have a method with multiple optional parameters and you want to provide the value of only one of them, you use named parameters. These are quite simply arguments written as [Parameter Name]: Value.

static void Main(string[] args)
{
    CreatePerson("Marius", BirthPlace: "Other Location");
    CreatePerson("Marius", Age: 26);
    CreatePerson("Marius", BirthPlace: "Other Location", Age: 26);
}

static void CreatePerson(string Name, int Age = 24, string BirthPlace = "USA")
{
    //Logic…
}

Visual Studio & Conditions

The only condition placed on the developer is that optional parameters must be declared at the end of the method’s parameter list, after all of the required parameters have been declared. This means that when calling the method, all required parameters must be passed in as with a normal method, and then the named arguments for the optional parameters can be given in any order.
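
As a quick illustration of that rule (using a hypothetical CreateOrder method, not one from the examples above):

// Valid: required parameters come first, optional parameters last.
static void CreateOrder(string Product, int Quantity = 1, string Shipping = "Ground")
{
    //Logic…
}

// Invalid: an optional parameter may not appear before a required one,
// so a declaration like the following will not compile.
//static void CreateOrder(int Quantity = 1, string Product) { }

static void Main(string[] args)
{
    CreateOrder("Widget");                               // required parameter only
    CreateOrder("Widget", Shipping: "Air");              // skip Quantity by naming Shipping
    CreateOrder("Widget", Shipping: "Air", Quantity: 3); // named arguments in any order
}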

Visual Studio 2010 supports these new language features very nicely. As you can see below, the default values for the optional parameters are shown, and IntelliSense supports the selection of the named parameters as expected.

VS 2010 Optional Parameters

Other Thoughts

As with each iteration of the C# language, optional parameters, named parameters, and default values give even more control to the C# developer. These new features will save a lot of overloading and method chaining, and will undoubtedly save a lot of developers from “factory method” value instantiation.

HAPPY CODING!

Securely Deleting Files From Your Hard Drive

Posted by Marius Dornean on March 08, 2010  /   Posted in Security, Technology

Deleting Files (or not…)

When deleting files in Windows, only the pointer to the file is removed from the file table, not the actual data. Think of it like the index of a book: if the index entry for a particular page is removed, that page becomes much harder to find, but if we go through the book page by page, we will eventually find it without the index’s help. Using free and commercial recovery tools such as Recuva, we can recover deleted files by scouring all of the bits on a hard drive, much like flipping through all of the pages of a book. For this reason, it is important to ensure that files we want deleted are fully stripped from the hard drive.

How The Disk Scrubber Works

The MariusSoft Disk Scrubber leverages the Windows cipher utility to cleanly wipe deleted files. This is accomplished in three steps: first, 0s are written over all of the deleted files; this is followed by 1s, and finally random 0s and 1s. This three-step process ensures that the sectors are obfuscated enough that the deleted files are no longer recognizable by recovery software.
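
For illustration only, here is a minimal sketch of how the cipher utility could be driven from C#; this is an assumption about the general approach, not the Disk Scrubber’s actual code. The built-in command cipher /w:<directory> overwrites the free space on that directory’s volume:

using System;
using System.Diagnostics;

static class FreeSpaceWiper
{
    static void Main()
    {
        // cipher /w performs the passes described above over the volume's free space.
        // The target path here (C:\) is just an example.
        var info = new ProcessStartInfo("cipher.exe", @"/w:C:\")
        {
            UseShellExecute = false
        };

        using (var process = Process.Start(info))
        {
            process.WaitForExit();
            Console.WriteLine("cipher exited with code {0}", process.ExitCode);
        }
    }
}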

Video Presentation

Get your hands on the Disk Scrubber here.

Introduction to Data Security

Posted by Marius Dornean on March 03, 2010  /   Posted in Security, Technology

Data in the Digital Age

What is data? Simply put, data is information stored in digital form. Why is information so important? Simple: information is the key to modern-day society. Information enables us to share ideas, make informed decisions, keep records, speed up processes, and more. Data storage and transfer are more prevalent today than they have ever been as the medium of choice for exchanging information. The biggest challenge is no longer getting data from one person to another, but securing that data.

With the introduction of the internet and the movement toward storing more and more data on computer systems, the electronic security age began and has flourished ever since. There are countless entities all over the world trying to gain unauthorized access to data on every kind of system imaginable, and at the same time there are experts countering these entities.

History of the Internet

To gain a better understanding of the internet and interconnected computer systems, one should look at its roots. The first rudimentary computer network that linked geographically separated computer systems was Arpanet (Advanced Research Projects Agency Network), created by DARPA (Defense Advanced Research Projects Agency). The network linked computer systems from universities across the US. It was the first network to use packet switching, a communications method in which data is transmitted in discrete groups, rather than the slower, less reliable circuit switching that was prevalent at the time.

As the network grew, more and more people gained access to transferring more data between each other. This brought many advantages and many security concerns. As people started transferring sensitive data, those wishing to gain access to that data illegally started creating ways to do so.

History of Hacking

The modern usage of the words ‘hack’ and ‘hacker’ was first widely introduced in the 1960s and originated at MIT. Originally, hacking referred to students who created a quick, elaborate, and/or bodged solution to a technical obstacle. The term is now almost synonymous with unauthorized access to computer systems, not just by students but by anyone. While hacking does have a rather dark modern-day meaning, it still semantically applies to other, legal forms of hacking, e.g. hackaday.com.

Some Notable Hacks in History

1983:

Kevin Poulsen, aka Dark Dante, hacks into Arpanet, the grandfather of the modern-day internet. While still a student, Poulsen found a loophole in Arpanet’s architecture and exploited it to gain temporary control of the US-wide network.

1988:

Robert Morris, a 23-year-old Cornell University graduate student, creates the first internet worm. Intending to count how many computers existed on the internet at the time, he writes a program of just 99 lines of code. To gauge the size correctly, he includes code to evade system administrators and exploit several vulnerabilities in the target systems. The worm spreads rapidly, infecting thousands of computers, crashing them and causing a huge potential loss in productivity.

1995:

Vladimir Levin, a Russian computer hacker, was the first to attempt to hack into a bank. He hacked into Citibank and managed to transfer $10 million into accounts across the world.

Increasing Amount of Data Accessible via the Internet

According to Netcraft, there are about 190,000,000 (190 million) websites on the internet, and this number is growing faster every year. This is not surprising given that there are nearly 1.6 million programmers in the world and more companies are pushing internet-based electronic services. The more websites and systems there are that connect to secure data and are reachable via the internet, the more chances there are that the data will be compromised.

As companies expand their presence and services on the web, more and more dynamic data is becoming available on the internet (online banking, social networking, accounting and tax software, etc.). Dynamic websites that provide these services, both personal and business, usually store some kind of identifiable information that can be monetized by hackers and spam organizations. Whether it is email addresses, names, social security numbers, credit card numbers, or corporate research, this data is sought by those who wish to sell it or use it for other unlawful means or exploitation.

Any system connected to the internet that holds sensitive data worth securing is usually at risk of being attacked. This is the reality of today’s data exchange landscape, and one that everyone, not just developers and system administrators, must think about. Every time you send your name, email address, or any other type of information to a website, you risk your data being compromised and stolen.

Data Breaches

Modern-day governments take hacking and data breaches very seriously. Depending on the industry, some companies are required to report any hacking or data breach incident. Huge amounts of money are spent on research and equipment to stop hackers.

Everything from network-level firewalls, intrusion detection systems, and web application firewalls to password-protected accounts, database security triggers, and application security frameworks serves as a modern-day countermeasure to try to prevent hackers from gaining unauthorized access to data.

Securing Data

Over the next couple of blog posts, I will talk about the different types of security. The following are some of the topics I will cover:

SQL server security
Web application security
Windows application security
.NET code execution security
Network level security
Social Engineering attacks and security awareness
Recovering from a breach of data security
Hard Drive File Deletion

Stay tuned!
