Product Upgrades – What Not To Do – Learn From Someone Else’s Mistakes

Posted by Marius Dornean on September 05, 2012  /   Posted in Informational, Technology

On July 31st Microsoft started rolling out Outlook.com, their new Windows 8-themed, web-based email service slated to replace msn.com, hotmail.com, and live.com email accounts. In their own words:

Microsoft today introduced Outlook.com, a new personal email service that reimagines the way that people use email – from a cleaner look, to fewer and less obtrusive ads, to new connections to social media sites like Facebook and Twitter.

As seen in some articles such as this one, many users have been rather frustrated with Microsoft’s roll-out of the new service. The concerns range from automatic upgrades performed without permission to outright loss of emails.

While these are serious concerns, I have been experiencing incredibly flaky service with the new system, which, to me, points to stability issues. I frequently have to refresh the page about five times before I can view my emails, and even then I can only open one or two messages before a system error boots me out. My Facebook and Twitter feeds don’t show up, read emails still appear as unread, deleted emails magically resurface, and much more!

Read More

Communicating Technology Initiative Business Value Across Business Units

Posted by Marius Dornean on September 05, 2012  /   Posted in Communication, Informational, Personal, Technology

Preface

The true value of a software Architect in an organization is the ability to translate the business’s vision and strategy into effective enterprise change, usually starting with the solution architecture. This means that an Architect needs to truly understand the business drivers and how they lead to features at the product level in order to successfully guide a product’s architecture road-map. Conversely, an Architect should be able to communicate both the technical and business aspects of this strategy to any audience, technical or otherwise.

The communication style used to deliver the message varies widely based on the audience’s duties. Technical folks focus on the technological implications of the architecture strategy and are usually more interested in the how rather than the why. A group of developers, for example, would rather know how their specific areas of concern are affected and what they need to focus on.
Read More

SQL Server 2005 & 2008 – Unable to start job (reason: syntax error)

Posted by Marius Dornean on April 16, 2010  /   Posted in Uncategorized

Error Message

Unable to start execution of step 1 (reason: line(1): Syntax error).  The step failed.

The Problem

After the job is edited, SQL Server renames or re-saves the package reference, leaving the stored package location inaccessible to the job step.

The Solution

Append a backslash ‘\’ to the package name

1. Navigate to your job (Database Server -> SQL Server Agent -> Jobs -> Your Job), right-click it, and choose Properties.

2. Click Steps in the job properties, select the first step in the job step list, and click Edit below the list.

3. Add a backslash ‘\’ to the package name and save.

HAPPY CODING!

Windows Application, Processes & Threads

Posted by Marius Dornean on April 16, 2010  /   Posted in Technology, Web Development

How does Windows allocate CPU time?

CPU (Central Processing Unit) time is incredibly valuable: it determines how much code can be executed, and how fast. Because of this, the operating system has to be careful about how it splits processing time between the different programs and services that are running. To make this possible, various structures are needed to represent the executing code.

Applications – Processes – Threads

An application consists of one or more processes. A process is an executing program. One or more threads run in the context of the process. A thread is the basic unit to which the operating system allocates processor time. A thread can execute any part of the process code, including parts currently being executed by another thread. All threads in a process share the same memory and system resources. Each CPU core can execute one thread at a time, so a multi-core system can execute as many threads truly in parallel as it has cores.

To recap, threads make up a process, and one or more processes make up an application. The simplest application has one process consisting of one thread; a multithreaded application runs multiple threads, and can even span multiple processes.
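
For the curious, here is a minimal C# sketch (my own, not from the original post) that makes this hierarchy visible by inspecting the very process the application runs in:

using System;
using System.Diagnostics;

class ProcessInfo
{
    static void Main()
    {
        // Inspect the process this application is running in.
        Process current = Process.GetCurrentProcess();
        Console.WriteLine("Process: {0} (PID {1})", current.ProcessName, current.Id);

        // Even a "single threaded" console application typically hosts
        // several threads (GC, finalizer, and other runtime helpers).
        Console.WriteLine("Thread count: {0}", current.Threads.Count);
        foreach (ProcessThread t in current.Threads)
        {
            Console.WriteLine("  Thread {0}: {1}", t.Id, t.ThreadState);
        }
    }
}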

Thread Pools

Creating and terminating a thread is very time consuming. Memory allocation, thread management, and processing time are all involved in handling multiple threads. The more threads an application has, the more of these resources it will consume.

Some applications are written to take advantage of multiple threads and run tasks (not Windows processes, but literal units of work) in parallel. Some of these tasks are very short-lived, which makes the cost of creating and terminating a thread for each one very expensive. In a way, the ROI is not high enough.

In order to make multithreading less expensive resource-wise, thread pools were introduced. Thread pools are collections of reusable worker threads that handle tasks on behalf of the main thread.

A good analogy is a restaurant where a new waiter (thread) must be hired every time a customer walks in the door. The waiter takes the order, brings the food, charges the fee, and handles other tasks. When the waiter is done serving that customer, they are fired (thread terminated) and a new waiter is hired for the next customer. This is incredibly inefficient if there are many customers who each spend very little time in the restaurant. A thread pool is like keeping a staff of waiters ready to serve new customers as they walk in.
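
To put the analogy in code, here is a minimal sketch (mine, not from the post) that queues short-lived tasks onto the .NET thread pool instead of hiring a new thread per customer:

using System;
using System.Threading;

class ThreadPoolDemo
{
    static void Main()
    {
        // Queue ten short-lived "customers" onto the shared pool of worker
        // threads instead of creating and destroying a thread for each one.
        for (int i = 0; i < 10; i++)
        {
            int customer = i; // capture a copy for the closure
            ThreadPool.QueueUserWorkItem(state =>
                Console.WriteLine("Customer {0} served by pool thread {1}",
                    customer, Thread.CurrentThread.ManagedThreadId));
        }

        Console.ReadLine(); // keep the process alive while the pool works
    }
}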

Process & Thread Priority

The operating system dictates how much processing time is given to each process based on the process priority. In Windows, you can change the priority of a running process from the Processes tab of Windows Task Manager. Similarly, a process can change the priority of the threads that make it up. By assigning a higher priority, that particular thread gets more processing time allocated to it.

A good use of this is an application that does heavy background processing. The thread responsible for the UI should have a higher priority than the thread doing the background work. This ensures that the application stays responsive to user input.
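
As a rough illustration (a sketch, not production code), this is how a C# application might lower a worker thread’s priority so the main thread stays responsive; DoBackgroundWork is a hypothetical placeholder:

using System.Diagnostics;
using System.Threading;

class PriorityDemo
{
    static void Main()
    {
        // Give the background worker less processing time than the main
        // (UI) thread so the application stays responsive.
        Thread worker = new Thread(DoBackgroundWork);
        worker.Priority = ThreadPriority.BelowNormal;
        worker.IsBackground = true;
        worker.Start();

        // The whole process's priority can be adjusted too, just like
        // changing it on the Processes tab of Windows Task Manager.
        Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.AboveNormal;
    }

    static void DoBackgroundWork()
    {
        // ...heavy background processing would go here...
    }
}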

Memory Sharing

While multiple threads can execute the same exact code at the same time, they cannot safely read and write the same memory space without coordination. Safeguards (locks and other synchronization primitives) must be used to ensure that memory access by multiple threads doesn’t overlap. Race conditions are one of the biggest problems in multithreaded applications. When one thread starts reading or writing a piece of memory it has locked, other threads have to wait for it to finish.

An example problem: Thread A starts reading some memory, but can’t finish until Thread B sends back a value that Thread A needs to complete its calculation. Thread B, in order to send that value, needs to read the memory that Thread A has locked. Both threads are now waiting for each other: a deadlock.
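
The same standoff is easy to reproduce in C#. A minimal sketch, where LockA and LockB are stand-ins for the two pieces of memory the threads are fighting over:

using System.Threading;

class DeadlockDemo
{
    static readonly object LockA = new object();
    static readonly object LockB = new object();

    static void Main()
    {
        // Thread 1 takes LockA, then wants LockB; Thread 2 takes LockB,
        // then wants LockA. If each grabs its first lock before the other
        // lets go, both wait forever: a deadlock.
        new Thread(() => { lock (LockA) { Thread.Sleep(100); lock (LockB) { } } }).Start();
        new Thread(() => { lock (LockB) { Thread.Sleep(100); lock (LockA) { } } }).Start();
    }
}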

Thread Timing

Windows splits up processing time between threads based on priority. Each thread gets a given number of clock cycles to do its work. If the thread finishes before its time is up, it returns the remaining processing time to the OS, which splits the extra time among the other threads. If time is still left over after every thread has run, it is allocated to the System Idle Process visible in Windows Task Manager.

Multi Threading Considerations

Problems such as the deadlocks and race conditions described above are some of the challenges facing multithreaded applications. Taking advantage of powerful systems with multiple cores capable of running multiple threads at once increases processing speed, but it gives developers a lot more challenges to work with. The timing of threads, the work they do, and the memory they share are all thrown into the equation when designing multithreaded applications.

HAPPY CODING!

SunSpider Browser Benchmarks

Posted by Marius Dornean on March 23, 2010  /   Posted in Technology

Web 2.0

If you don’t already know, most of the web 2.0 technologies today are built on top of JavaScript. JavaScript is a scripting language that browsers use to perform the more advanced features of a website. All of the pretty CSS layouts, the XML traffic, and of course the constant tweeting and face-booking would not be possible without this language and the support browsers provide for it.

Browsers

As most of us know, different browsers render pages differently; some are more efficient than others at different tasks, and some are more secure. Some of the bigger players come preinstalled on the OS of your choosing, while others sit quietly behind the scenes waiting for their few followers to download them. In any case, one thing is universal: each and every browser is ultimately different from the others.

Benchmarking

Benchmarking is a way to test the speed and efficiency of a task. Until recently, benchmarking JavaScript was almost impossible. There have been lots of micro-benchmarks here and there, but no true test that sums up the speed of the JavaScript engine in a browser.
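
To illustrate the idea, here is a sketch of a micro-benchmark in C# (this blog’s usual language); the task being timed is arbitrary, but the time-and-average pattern is the same one SunSpider applies to a browser’s JavaScript engine:

using System;
using System.Diagnostics;

class MicroBenchmark
{
    static void Main()
    {
        // Time a simple task several times and report the average.
        const int runs = 5;
        Stopwatch timer = Stopwatch.StartNew();

        for (int run = 0; run < runs; run++)
        {
            double sum = 0;
            for (int i = 1; i < 1000000; i++)
            {
                sum += Math.Sqrt(i); // the "task" being measured
            }
        }

        timer.Stop();
        Console.WriteLine("Average: {0} ms per run", timer.ElapsedMilliseconds / runs);
    }
}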

WebKit Open Source Project

Webkit.org is the home of the open source web browser engine developed by Apple, currently used in Safari and multiple OS X applications. One of the tools it provides is the SunSpider JavaScript Benchmark, a true representation of a browser’s JavaScript performance. By testing multiple facets of a browser’s JavaScript processing speed, we can get a balanced, real-world comparison between the different browsers.

Chart in milliseconds (lower means quicker; the total was divided by 5 to fit in the chart)

Raw Numbers

Browser                   3D     Access  Bitops  ControlFlow  Crypto  Date   Math   Regexp  String  Total
Internet Explorer 9 Beta  440.6  544.6   489.4   86.6         245     183.2  270.8  39      466.4   2765.4
Internet Explorer 8       668.4  929.6   725.4   139          389.8   474.2  595    208     1012    5141.6
Firefox 3.6               165.4  165.4   51.8    47           65.8    164.4  68.6   56.8    279     1064.2
Safari 4.0                71.8   60      37.4    6.2          40      78.4   61.6   26.2    210.6   592.2
Chrome 4.1                81.6   43.4    48.4    3.4          38.4    67.6   50.4   18      193.8   545
Opera 10.51               62.2   53      23.2    5.8          30      69     56.6   15.6    153.6   469

Conclusion

As you can see, not all browsers perform at the same speed. Lagging far behind is Microsoft’s Internet Explorer 8. While IE 9 promises, and in tech previews proves, to have made huge leaps in closing the gap with the other leading browsers, it suffers from slow release cycles. The other leading browsers push routine updates, which lets end users browse at the fastest speeds available, while IE leaves months or years between version releases.

The true test of speed is yet to come. As more Twitter- and Facebook-style sites pop up, and web 3.0 inches closer, it is up to the browser developers to keep up with the new demands for speed and efficiency.

C# 4.0 – Optional Parameters, Default Values, and Named Parameters

Posted by Marius Dornean on March 22, 2010  /   Posted in Technology, Web Development

Parameters

Parameters are simply values that can be passed into a method. A method specifies which parameters, along with their types, it accepts and expects. Traditionally in C#, when we wanted to accept different sets of parameters or set default values, we would overload methods and supply the defaults by chaining the calls. This can be messy, produces a lot of variations of a method, and makes keeping track of default values a challenge.

static void Main(string[] args)
{
    CreatePerson("Marius");
    CreatePerson("Marius", 24);
    CreatePerson("Marius", 24, "USA");
}

static void CreatePerson(string Name)
{
    CreatePerson(Name, 24, "USA");
}

static void CreatePerson(string Name, int Age)
{
    CreatePerson(Name, Age, "USA");
}

static void CreatePerson(string Name, int Age, string Location)
{
    //Logic…
}

Default Values & Optional Parameters in C# 4.0

To deal with the default value issue, C# 4.0 introduces default parameter values. By assigning a default to a parameter, we no longer require the value to be passed in, which automatically makes the parameter optional.

static void Main(string[] args)
{
    CreatePerson("Marius");
    CreatePerson("Marius", 24);
    CreatePerson("Marius", 24, "USA");
}

static void CreatePerson(string Name, int Age = 20, string Location = "USA")
{
    //Logic…
}

Named Parameters

When you have a method with multiple optional parameters and you want to provide the value of only one, you use named parameters. These are quite simply arguments formatted as [Parameter Name]: Value.

static void Main(string[] args)
{
    CreatePerson("Marius", BirthPlace: "Other Location");
    CreatePerson("Marius", Age: 26);
    CreatePerson("Marius", BirthPlace: "Other Location", Age: 26);
}

static void CreatePerson(string Name, int Age = 24, string BirthPlace = "USA")
{
    //Logic…
}

Visual Studio & Conditions

The only condition placed on the developer is that optional parameters must be declared at the end of the method’s argument list, after all of the required parameters. This means that when calling the method, all required parameters must be passed in as usual, and then values for the optional parameters can be given by name, in any order, as the sketch below illustrates.
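
These method names are hypothetical, chosen only to illustrate the rule:

// Compiles: required parameters first, optional ones last.
static void Valid(string Name, int Age = 24, string BirthPlace = "USA") { }

// Does not compile: an optional parameter may not precede a required one.
//static void Invalid(int Age = 24, string Name) { }

// Callers pass required parameters positionally, then pick and choose
// among the optional ones by name, in any order:
//Valid("Marius", BirthPlace: "Other Location", Age: 26);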

Visual Studio 2010 adds support for these new language features very nicely. As you can see below, the default values for the optional parameters are shown, and IntelliSense supports the selection of named parameters as expected.

VS 2010 Optional Parameters

Other Thoughts

As with each iteration of the C# language, optional parameters, named parameters, and default values give even more control to the C# developer. These new features will save a lot of overloading and method chaining, and will undoubtedly save a lot of developers from “factory method” value instantiation.

HAPPY CODING!

Securely Deleting Files From Your Hard Drive

Posted by Marius Dornean on March 08, 2010  /   Posted in Security, Technology

Deleting Files (or not…)

When you delete a file in Windows, only the pointer to the file is removed from the file table, not the actual data. Think of it like the index of a book: if the index entry for a particular page is removed, the page becomes much harder to find, but if we go through the book page by page, we will eventually find it without the index’s help. Using free and commercial recovery tools such as Recuva, deleted files can be recovered from a hard drive by scouring all of the bits on the disk, much like flipping through all of the pages of a book. For this reason, it is important to ensure that files we want deleted are fully stripped from the hard drive.

How The Disk Scrubber Works

The MariusSoft Disk Scrubber leverages the Windows cipher utility to cleanly wipe deleted files. This is accomplished in three passes: first, 0’s are written over all of the freed space, followed by 1’s, and finally random 0’s and 1’s. This three-pass process obfuscates the sectors enough that the deleted files are no longer recognizable by recovery software.
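
The Disk Scrubber’s internals aren’t published here, but driving the same cipher utility yourself from C# might look something like the sketch below. cipher /w wipes the free space on a volume; the C:\ target is an assumption:

using System.Diagnostics;

class CipherWipe
{
    static void Main()
    {
        // cipher /w overwrites the free space on the given volume using
        // the same three passes described above: 0's, 1's, then random data.
        // The C:\ target is an assumption; change it to the volume you want.
        ProcessStartInfo psi = new ProcessStartInfo("cipher.exe", @"/w:C:\");
        psi.UseShellExecute = false;

        using (Process wipe = Process.Start(psi))
        {
            wipe.WaitForExit();
        }
    }
}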

Video Presentation

Get your hands on the Disk Scrubber here.

Introduction to Data Security

Posted by Marius Dornean on March 03, 2010  /   Posted in Security, Technology

Data in the Digital Age

What is data? Simply put, data is information stored in digital form. Why is information so important? Simple: information is the key to modern-day society. It enables us to share ideas, make informed decisions, keep records, speed up processes, and so on. Digital storage and transfer is more prevalent today than it has ever been as the medium of choice for moving information. The biggest challenge is no longer getting data from one person to another, but securing that data.

With the introduction of the internet and the movement toward storing more and more data on computer systems, the electronic security age began and has flourished ever since. There are countless entities all over the world trying to gain unauthorized access to data on every kind of system imaginable, and at the same time there are experts countering them.

History of the Internet

In order to gain a better understanding of the internet and interconnected computer systems, one should look at its roots. The first rudimentary computer network to link geographically separated computer systems was Arpanet (Advanced Research Projects Agency Network), created by DARPA (Defense Advanced Research Projects Agency). The network linked computer systems at universities across the US. It was the first network to use packet switching, a communications method where data is transmitted in groups (packets), rather than the slower, less reliable circuit switching that was prevalent at the time.

As the network grew, more and more people gained the ability to transfer data between each other. This brought many advantages, and many security concerns. As people started transferring sensitive data, those wishing to gain access to that data illegally started creating ways to do so.

History of Hacking

The modern usage of the words ‘hack’ and ‘hacker’ was first widely introduced in the 1960s and originated at MIT. Originally, hacking referred to students who created quick, elaborate, and/or bodged solutions to technical obstacles. The term is now almost synonymous with unauthorized access to computer systems, not just by students but by anyone. While hacking has a rather dark modern-day meaning, it still semantically applies to other, legal forms of hacking (e.g., hackaday.com).

Some Notable Hacks in History

1983:

Kevin Poulsen, aka Dark Dante, hacks into Arpanet, the grandfather of the modern-day internet. While still a student, Poulsen found a loophole in Arpanet’s architecture and exploited it to gain temporary control of the US-wide network.

1988:

Robert Morris, a 23-year-old Cornell University graduate student, creates the first internet worm. Intending to count how many computers existed on the internet at the time, he writes a program of just 99 lines of code. To gauge the size correctly, he includes code to evade system administrators and exploit several vulnerabilities in the target systems. The worm spreads rapidly, infecting thousands of computers, crashing them, and causing a huge loss in productivity.

1995:

Vladimir Levin, a Russian computer hacker, was one of the first to hack into a bank. He broke into Citibank and managed to transfer $10 million into accounts across the world.

Increasing Amount of Data Accessible via the Internet

According to Netcraft, there are about 190,000,000 (190 million) websites on the internet, a number that is increasing faster every year. This is not surprising given that there are nearly 1.6 million programmers in the world and more companies pushing internet-based electronic services. The more websites and systems that connect to secure data and are reachable via the internet, the more chances there are for that data to be compromised.

As companies expand their presence and services on the web, more and more dynamic data is becoming available on the internet (online banking, social networking, accounting and tax software, etc.). Dynamic websites that provide these services, both personal and business, usually store some kind of identifiable information that can be monetized by hackers and spam organizations. Whether it is email addresses, names, social security numbers, credit card numbers, or corporate research, this data is sought by those who wish to sell it or use it for other unlawful means.

Any internet-connected system holding sensitive data worth securing is at risk of being attacked. This is the reality of today’s data exchange landscape, and one that everyone, not just developers and system administrators, must think about. Every time you send your name, email address, or any other information to a website, you risk having that data compromised and stolen.

Data Breaches

Modern-day governance takes hacking and data breaches very seriously. Depending on the industry, some companies are required to report any hacking or data breach incidents. Huge amounts of money are spent on research and equipment to stop hackers.

Everything from network-level firewalls, intrusion detection systems, and web application firewalls to password-protected accounts, database security triggers, and application security frameworks are modern-day countermeasures used to try to prevent hackers from gaining unauthorized access to data.

Securing Data

Over the next several blog posts, I will talk about the different types of security. The following are some of the topics I will cover:

SQL server security
Web application security
Windows application security
.NET code execution security
Network level security
Social Engineering attacks and security awareness
Recovering from a breach of data security
Hard Drive File Deletion

Stay tuned!

SQL SERVER 2005 & 2008 Quick Deletes in Large Tables

Posted by Marius Dornean on February 26, 2010  /   Posted in Technology

SQL Transaction Logging

By default, operations in SQL Server such as inserts, updates, and deletes are recorded in the transaction log. The principle behind the transaction log is to allow an operation to be rolled back if an error occurs and to ship the logs for backup purposes. When you have a huge table and you initiate a delete of all of its rows, SQL Server deletes each row after logging it in the transaction log.

The Problem

Logging takes a long time, and the log file grows as each record is deleted from the database. If an error is encountered, even on the last row to be deleted, all of the logged operations must be rolled back, and the rollback itself takes a significant amount of time.

The Solution

SQL Server 2005 and greater enhanced the functionality of the TOP operator: it can now be used with INSERT, UPDATE, and DELETE statements. By leveraging TOP and deleting a limited set of rows at a time, we minimize the number of records to be rolled back in case of an error, issue fewer locks on the table (keeping it more usable while the operation runs), and allow the log file space to be reused.

The following code performs a batch deletion of records 10,000 at a time:

-- Delete in batches of 10,000 rows so each batch commits
-- and the transaction log space can be reused.
While (Select Count(*) From TABLENAME) > 0
Begin
    Delete Top (10000) From TABLENAME
    If @@ROWCOUNT = 0 Break
End

Just replace TABLENAME with the name of the table you are deleting from and run the statement against your database.

HAPPY CODING!
