Feature suggestion: journal file multiplication.


Feature suggestion: journal file multiplication.

Andrew Frolov
Hi!

While operating Bitronix in production we have to implement a proper backup strategy, and that means backing up the Bitronix journals somehow. Suppose something goes wrong with our HDD (SSD, RAID, whatever) while writing to btxlog: the journal will be corrupted and Bitronix will not be able to perform automatic recovery. There is no easy way out of that situation; we can't even perform a proper recovery manually.

My suggestion is to implement something like Oracle's "Redo Log Group". The idea is to keep several log files instead of one, place these files on separate drives, and write everything to all of them in parallel. If one drive breaks, we can recover from the log copy on the second drive. It may look like RAID, but without a RAID controller, which can itself be a point of failure.
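A minimal sketch of what such multiplexed appends could look like, using plain NIO file channels (the class and method names here are hypothetical, not actual Bitronix code): the same record is written to every copy and each copy is forced to disk, so a single broken file still leaves an intact replica on the other drive.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch, not a Bitronix class: duplicate every journal record
// across several files (ideally on separate drives) so that one broken copy
// still leaves an intact replica to recover from.
public class MultiplexedJournalWriter {
    private final List<FileChannel> channels = new ArrayList<>();

    public MultiplexedJournalWriter(Path... journalFiles) throws IOException {
        for (Path p : journalFiles) {
            channels.add(FileChannel.open(p,
                    StandardOpenOption.CREATE, StandardOpenOption.WRITE));
        }
    }

    // Write the same record to every copy, then force each copy to disk.
    public void append(byte[] record) throws IOException {
        for (FileChannel ch : channels) {
            ByteBuffer buf = ByteBuffer.wrap(record);
            while (buf.hasRemaining()) {
                ch.write(buf);
            }
            ch.force(false); // flush the data to the device
        }
    }

    public void close() throws IOException {
        for (FileChannel ch : channels) {
            ch.close();
        }
    }
}
```

Forcing every copy means a commit is only as fast as the slowest drive; that is the price of tolerating the loss of any single copy.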

http://www.dba-oracle.com/concepts/online_redo_logs.htm
http://www.oracledistilled.com/oracle-database/administration/multiplexing-the-redo-log-files/

What do you think?

Re: Feature suggestion: journal file multiplication.

Brett Wooldridge-2
Hi Andrew,

I think it is an excellent idea. The new journal uses non-blocking NIO, so it should be adaptable to multiple writes without much performance impact. Do you think you would be able to contribute the code to the project?
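For what it's worth, the parallel part could be sketched with the JDK's asynchronous NIO channels (a hypothetical helper, not the actual Bitronix journal code): issue the write to every replica first and only then wait for all of them, so the copies are written concurrently rather than one after another.

```java
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousFileChannel;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;

// Sketch only, not the actual Bitronix journal code: start the write on every
// replica before waiting on any of them, so the copies proceed in parallel
// and the added latency is roughly that of the slowest device, not the sum.
public class ParallelAppend {
    public static void appendToAll(List<AsynchronousFileChannel> channels,
                                   byte[] record, long position)
            throws InterruptedException, ExecutionException {
        List<Future<Integer>> pending = new ArrayList<>();
        for (AsynchronousFileChannel ch : channels) {
            // each channel gets its own buffer over the same record bytes
            pending.add(ch.write(ByteBuffer.wrap(record), position));
        }
        for (Future<Integer> f : pending) {
            f.get(); // returns once that copy's write has completed
        }
    }
}
```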

Brett



--
View this message in context: http://bitronix-transaction-manager.10986.n7.nabble.com/Feature-suggestion-journal-file-multiplication-tp1599.html
Sent from the Bitronix Transaction Manager mailing list archive at Nabble.com.

---------------------------------------------------------------------
To unsubscribe from this list, please visit:

    http://xircles.codehaus.org/manage_email




Re: Feature suggestion: journal file multiplication.

Andrew Frolov
I will certainly try; we need this feature to launch our project. It is good to hear that you approve of this feature.

After reviewing the code, I think it might not be that hard. All I have to do is write my own version of TransactionLogAppender that wraps several existing TransactionLogAppenders and performs parallel writes, plus parallel reads with a consistency check.
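A rough shape of that delegating appender, with hypothetical stand-in interfaces since the real TransactionLogAppender API is not shown here: writes fan out to every copy, and a read-side check verifies the copies agree record-for-record.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical stand-in for a journal copy; the real Bitronix
// TransactionLogAppender has a different, richer interface.
interface JournalCopy {
    void append(byte[] record);
    List<byte[]> readAll();
}

// In-memory copy, just to make the sketch self-contained and testable.
class InMemoryCopy implements JournalCopy {
    private final List<byte[]> records = new ArrayList<>();
    public void append(byte[] record) { records.add(record.clone()); }
    public List<byte[]> readAll() { return records; }
}

class MultiplexingJournal {
    private final List<JournalCopy> copies;

    MultiplexingJournal(List<JournalCopy> copies) { this.copies = copies; }

    // Parallel write: every copy receives the same record.
    void append(byte[] record) {
        for (JournalCopy c : copies) {
            c.append(record);
        }
    }

    // Consistency check on read: all copies must agree record-for-record.
    boolean copiesConsistent() {
        List<byte[]> reference = copies.get(0).readAll();
        for (JournalCopy c : copies.subList(1, copies.size())) {
            List<byte[]> other = c.readAll();
            if (other.size() != reference.size()) return false;
            for (int i = 0; i < reference.size(); i++) {
                if (!Arrays.equals(reference.get(i), other.get(i))) return false;
            }
        }
        return true;
    }
}
```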
Reply | Threaded
Open this post in threaded view
|

Re: Feature suggestion: journal file multiplication.

Ludovic Orban-2
If you find that solution appealing, by all means go ahead, implement it, and contribute it back; that would be awesome!

But I'm not sure this would buy you much, or maybe I don't really understand how it would work in practice. If you have two journal replicas, which one should you read during a recovery? And there is no guarantee that they are both identical, since you could experience a crash after one copy has been updated but before the other had a chance to be.

How would that work in those situations?







Re: Feature suggestion: journal file multiplication.

Andrew Frolov
In case of recovery I will have one of the following:
- one normal log, and one corrupted (wrong CRC) or lost;
- both corrupted;
- both normal and equal;
- both normal but not equal. In that case, conflict resolution can be based on the transaction id or transaction timestamp.
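That resolution rule can be sketched as follows, assuming a hypothetical record layout with a leading sequence number and a trailing CRC32 (the real Bitronix record format differs): skip copies whose CRC does not match, and among the valid copies prefer the one with the highest sequence number.

```java
import java.nio.ByteBuffer;
import java.util.zip.CRC32;

// Hypothetical record layout, not the Bitronix on-disk format:
// [8-byte sequence number][payload...][4-byte CRC32 over everything before it]
public class RecoveryChooser {

    static byte[] makeRecord(long seq, byte[] payload) {
        ByteBuffer buf = ByteBuffer.allocate(8 + payload.length + 4);
        buf.putLong(seq).put(payload);
        CRC32 crc = new CRC32();
        crc.update(buf.array(), 0, 8 + payload.length);
        buf.putInt((int) crc.getValue());
        return buf.array();
    }

    static boolean crcValid(byte[] record) {
        if (record.length < 12) return false;
        CRC32 crc = new CRC32();
        crc.update(record, 0, record.length - 4);
        int stored = ByteBuffer.wrap(record, record.length - 4, 4).getInt();
        return stored == (int) crc.getValue();
    }

    static long sequenceOf(byte[] record) {
        return ByteBuffer.wrap(record, 0, 8).getLong();
    }

    // The cases from the post: one good copy -> use it; all corrupted -> -1;
    // several good copies -> use the highest sequence number, since the copies
    // may legitimately differ after a crash mid-update.
    static int pickRecoverySource(byte[][] lastRecords) {
        int best = -1;
        long bestSeq = Long.MIN_VALUE;
        for (int i = 0; i < lastRecords.length; i++) {
            if (!crcValid(lastRecords[i])) continue; // corrupted or torn copy
            long seq = sequenceOf(lastRecords[i]);
            if (seq > bestSeq) {
                bestSeq = seq;
                best = i;
            }
        }
        return best;
    }
}
```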